Compare commits

..

12 Commits

| Author | SHA1 | Message | Date |
|---|---|---|---|
| dgtlmoon | fba4b6747e | Handle disabled situation | 2022-05-20 16:03:10 +02:00 |
| dgtlmoon | 4cf2d9d7aa | WIP on access | 2022-05-20 16:00:34 +02:00 |
| dgtlmoon | 6bc1c681ec | Work on handling auth token | 2022-05-20 15:37:43 +02:00 |
| dgtlmoon | 1267512858 | Adjust tests api_ -> form_ | 2022-05-20 12:47:50 +02:00 |
| dgtlmoon | 886ef0c7c1 | rename def api_ to def form_ because these arent API related | 2022-05-20 12:44:07 +02:00 |
| dgtlmoon | 97c6db5e56 | Code cleanup | 2022-05-20 12:41:01 +02:00 |
| dgtlmoon | 23dde28399 | More test coverage | 2022-05-20 12:40:32 +02:00 |
| dgtlmoon | 91fe2dd420 | bumping test | 2022-05-20 09:33:35 +02:00 |
| dgtlmoon | 408c8878f3 | bump old old API | 2022-05-19 18:08:15 +02:00 |
| dgtlmoon | 37614224e5 | WIP | 2022-05-19 18:00:27 +02:00 |
| dgtlmoon | 1caff23d2c | WIP | 2022-05-19 15:47:36 +02:00 |
| dgtlmoon | b01ee24d55 | POC | 2022-05-19 15:14:36 +02:00 |
49 changed files with 257 additions and 1367 deletions


@@ -1,9 +1,9 @@
--- ---
name: Bug report name: Bug report
about: Create a bug report, if you don't follow this template, your report will be DELETED about: Create a report to help us improve
title: '' title: ''
labels: 'triage' labels: ''
assignees: 'dgtlmoon' assignees: ''
--- ---
@@ -11,18 +11,15 @@ assignees: 'dgtlmoon'
A clear and concise description of what the bug is. A clear and concise description of what the bug is.
**Version** **Version**
*Exact version* in the top right area: 0.... In the top right area: 0....
**To Reproduce** **To Reproduce**
Steps to reproduce the behavior: Steps to reproduce the behavior:
1. Go to '...' 1. Go to '...'
2. Click on '....' 2. Click on '....'
3. Scroll down to '....' 3. Scroll down to '....'
4. See error 4. See error
! ALWAYS INCLUDE AN EXAMPLE URL WHERE IT IS POSSIBLE TO RE-CREATE THE ISSUE !
**Expected behavior** **Expected behavior**
A clear and concise description of what you expected to happen. A clear and concise description of what you expected to happen.


@@ -1,8 +1,8 @@
--- ---
name: Feature request name: Feature request
about: Suggest an idea for this project about: Suggest an idea for this project
title: '[feature]' title: ''
labels: 'enhancement' labels: ''
assignees: '' assignees: ''
--- ---

.gitignore (vendored), 1 line changed

@@ -8,6 +8,5 @@ __pycache__
build build
dist dist
venv venv
test-datastore
*.egg-info* *.egg-info*
.vscode/settings.json .vscode/settings.json


@@ -1,4 +1,3 @@
recursive-include changedetectionio/api *
recursive-include changedetectionio/templates * recursive-include changedetectionio/templates *
recursive-include changedetectionio/static * recursive-include changedetectionio/static *
recursive-include changedetectionio/model * recursive-include changedetectionio/model *


@@ -12,7 +12,7 @@ Live your data-life *pro-actively* instead of *re-actively*.
Free, Open-source web page monitoring, notification and change detection. Don't have time? [**Try our $6.99/month subscription - unlimited checks and watches!**](https://lemonade.changedetection.io/start) Free, Open-source web page monitoring, notification and change detection. Don't have time? [**Try our $6.99/month subscription - unlimited checks and watches!**](https://lemonade.changedetection.io/start)
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://lemonade.changedetection.io/start) [<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://lemonade.changedetection.io/start)
**Get your own private instance now! Let us host it for you!** **Get your own private instance now! Let us host it for you!**
@@ -48,19 +48,12 @@ _Need an actual Chrome runner with Javascript support? We support fetching via W
## Screenshots ## Screenshots
### Examine differences in content. Examining differences in content.
Easily see what changed, examine by word, line, or individual character. <img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />
Please :star: star :star: this project and help it grow! https://github.com/dgtlmoon/changedetection.io/ Please :star: star :star: this project and help it grow! https://github.com/dgtlmoon/changedetection.io/
### Target elements with the Visual Selector tool.
Available when connected to a <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Playwright-content-fetcher">playwright content fetcher</a> (available also as part of our subscription service)
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/visualselector-anim.gif" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />
## Installation ## Installation
@@ -136,7 +129,7 @@ Just some examples
<a href="https://github.com/caronc/apprise#popular-notification-services">And everything else in this list!</a> <a href="https://github.com/caronc/apprise#popular-notification-services">And everything else in this list!</a>
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-notifications.png" style="max-width:100%;" alt="Self-hosted web page change monitoring notifications" title="Self-hosted web page change monitoring notifications" /> <img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot-notifications.png" style="max-width:100%;" alt="Self-hosted web page change monitoring notifications" title="Self-hosted web page change monitoring notifications" />
Now you can also customise your notification content! Now you can also customise your notification content!
@@ -144,11 +137,11 @@ Now you can also customise your notification content!
Detect changes and monitor data in JSON API's by using the built-in JSONPath selectors as a filter / selector. Detect changes and monitor data in JSON API's by using the built-in JSONPath selectors as a filter / selector.
![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/json-filter-field-example.png) ![image](https://user-images.githubusercontent.com/275001/125165842-0ce01980-e1dc-11eb-9e73-d8137dd162dc.png)
This will re-parse the JSON and apply formatting to the text, making it super easy to monitor and detect changes in JSON API results This will re-parse the JSON and apply formatting to the text, making it super easy to monitor and detect changes in JSON API results
![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/json-diff-example.png) ![image](https://user-images.githubusercontent.com/275001/125165995-d9ea5580-e1dc-11eb-8030-f0deced2661a.png)
### Parse JSON embedded in HTML! ### Parse JSON embedded in HTML!
@@ -184,7 +177,7 @@ Or directly donate an amount PayPal [![Donate](https://img.shields.io/badge/Dona
Or BTC `1PLFN327GyUarpJd7nVe7Reqg9qHx5frNn` Or BTC `1PLFN327GyUarpJd7nVe7Reqg9qHx5frNn`
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/btc-support.png" style="max-width:50%;" alt="Support us!" /> <img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/btc-support.png" style="max-width:50%;" alt="Support us!" />
## Commercial Support ## Commercial Support
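The README hunk above describes filtering JSON API responses with a JSONPath selector before diffing. As a rough illustration of that idea only: the sketch below uses the jsonpath-ng library and an invented payload; this compare does not show which JSONPath implementation or selector prefix the project actually uses, so treat both as assumptions.

```python
# Illustrative only: narrow a JSON API response to one field before diffing.
# Library (jsonpath-ng) and payload are assumptions, not taken from this diff.
import json
from jsonpath_ng import parse  # pip install jsonpath-ng

api_response = '{"product": {"name": "Widget", "price": 19.99, "stock": 42}}'
data = json.loads(api_response)

# Watch only the price, so unrelated churn elsewhere in the payload is ignored
matches = parse('$.product.price').find(data)
snapshot = "\n".join(json.dumps(m.value) for m in matches)
print(snapshot)  # -> 19.99
```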

[Image file changed: 894 B before and 894 B after; contents not shown]


@@ -20,7 +20,6 @@ from copy import deepcopy
from threading import Event from threading import Event
import flask_login import flask_login
import logging
import pytz import pytz
import timeago import timeago
from feedgen.feed import FeedGenerator from feedgen.feed import FeedGenerator
@@ -44,7 +43,7 @@ from flask_wtf import CSRFProtect
from changedetectionio import html_tools from changedetectionio import html_tools
from changedetectionio.api import api_v1 from changedetectionio.api import api_v1
__version__ = '0.39.14' __version__ = '0.39.13.1'
datastore = None datastore = None
@@ -179,10 +178,6 @@ def changedetection_app(config=None, datastore_o=None):
global datastore global datastore
datastore = datastore_o datastore = datastore_o
# so far just for read-only via tests, but this will be moved eventually to be the main source
# (instead of the global var)
app.config['DATASTORE']=datastore_o
#app.config.update(config or {}) #app.config.update(config or {})
login_manager = flask_login.LoginManager(app) login_manager = flask_login.LoginManager(app)
@@ -322,19 +317,25 @@ def changedetection_app(config=None, datastore_o=None):
for watch in sorted_watches: for watch in sorted_watches:
dates = list(watch.history.keys()) dates = list(watch['history'].keys())
# Re #521 - Don't bother processing this one if theres less than 2 snapshots, means we never had a change detected. # Re #521 - Don't bother processing this one if theres less than 2 snapshots, means we never had a change detected.
if len(dates) < 2: if len(dates) < 2:
continue continue
prev_fname = watch.history[dates[-2]] # Convert to int, sort and back to str again
# @todo replace datastore getter that does this automatically
dates = [int(i) for i in dates]
dates.sort(reverse=True)
dates = [str(i) for i in dates]
prev_fname = watch['history'][dates[1]]
if not watch.viewed: if not watch['viewed']:
# Re #239 - GUID needs to be individual for each event # Re #239 - GUID needs to be individual for each event
# @todo In the future make this a configurable link back (see work on BASE_URL https://github.com/dgtlmoon/changedetection.io/pull/228) # @todo In the future make this a configurable link back (see work on BASE_URL https://github.com/dgtlmoon/changedetection.io/pull/228)
guid = "{}/{}".format(watch['uuid'], watch['last_changed']) guid = "{}/{}".format(watch['uuid'], watch['last_changed'])
fe = fg.add_entry() fe = fg.add_entry()
# Include a link to the diff page, they will have to login here to see if password protection is enabled. # Include a link to the diff page, they will have to login here to see if password protection is enabled.
# Description is the page you watch, link takes you to the diff JS UI page # Description is the page you watch, link takes you to the diff JS UI page
base_url = datastore.data['settings']['application']['base_url'] base_url = datastore.data['settings']['application']['base_url']
@@ -349,15 +350,13 @@ def changedetection_app(config=None, datastore_o=None):
watch_title = watch.get('title') if watch.get('title') else watch.get('url') watch_title = watch.get('title') if watch.get('title') else watch.get('url')
fe.title(title=watch_title) fe.title(title=watch_title)
latest_fname = watch.history[dates[-1]] latest_fname = watch['history'][dates[0]]
html_diff = diff.render_diff(prev_fname, latest_fname, include_equal=False, line_feed_sep="</br>") html_diff = diff.render_diff(prev_fname, latest_fname, include_equal=False, line_feed_sep="</br>")
fe.description(description="<![CDATA[" fe.description(description="<![CDATA[<html><body><h4>{}</h4>{}</body></html>".format(watch_title, html_diff))
"<html><body><h4>{}</h4>{}</body></html>"
"]]>".format(watch_title, html_diff))
fe.guid(guid, permalink=False) fe.guid(guid, permalink=False)
dt = datetime.datetime.fromtimestamp(int(watch.newest_history_key)) dt = datetime.datetime.fromtimestamp(int(watch['newest_history_key']))
dt = dt.replace(tzinfo=pytz.UTC) dt = dt.replace(tzinfo=pytz.UTC)
fe.pubDate(dt) fe.pubDate(dt)
@@ -416,13 +415,11 @@ def changedetection_app(config=None, datastore_o=None):
tags=existing_tags, tags=existing_tags,
active_tag=limit_tag, active_tag=limit_tag,
app_rss_token=datastore.data['settings']['application']['rss_access_token'], app_rss_token=datastore.data['settings']['application']['rss_access_token'],
has_unviewed=datastore.has_unviewed, has_unviewed=datastore.data['has_unviewed'],
# Don't link to hosting when we're on the hosting environment # Don't link to hosting when we're on the hosting environment
hosted_sticky=os.getenv("SALTED_PASS", False) == False, hosted_sticky=os.getenv("SALTED_PASS", False) == False,
guid=datastore.data['app_guid'], guid=datastore.data['app_guid'],
queued_uuids=update_q.queue) queued_uuids=update_q.queue)
if session.get('share-link'): if session.get('share-link'):
del(session['share-link']) del(session['share-link'])
return output return output
@@ -494,10 +491,10 @@ def changedetection_app(config=None, datastore_o=None):
# 0 means that theres only one, so that there should be no 'unviewed' history available # 0 means that theres only one, so that there should be no 'unviewed' history available
if newest_history_key == 0: if newest_history_key == 0:
newest_history_key = list(datastore.data['watching'][uuid].history.keys())[0] newest_history_key = list(datastore.data['watching'][uuid]['history'].keys())[0]
if newest_history_key: if newest_history_key:
with open(datastore.data['watching'][uuid].history[newest_history_key], with open(datastore.data['watching'][uuid]['history'][newest_history_key],
encoding='utf-8') as file: encoding='utf-8') as file:
raw_content = file.read() raw_content = file.read()
@@ -591,12 +588,12 @@ def changedetection_app(config=None, datastore_o=None):
# Reset the previous_md5 so we process a new snapshot including stripping ignore text. # Reset the previous_md5 so we process a new snapshot including stripping ignore text.
if form_ignore_text: if form_ignore_text:
if len(datastore.data['watching'][uuid].history): if len(datastore.data['watching'][uuid]['history']):
extra_update_obj['previous_md5'] = get_current_checksum_include_ignore_text(uuid=uuid) extra_update_obj['previous_md5'] = get_current_checksum_include_ignore_text(uuid=uuid)
# Reset the previous_md5 so we process a new snapshot including stripping ignore text. # Reset the previous_md5 so we process a new snapshot including stripping ignore text.
if form.css_filter.data.strip() != datastore.data['watching'][uuid]['css_filter']: if form.css_filter.data.strip() != datastore.data['watching'][uuid]['css_filter']:
if len(datastore.data['watching'][uuid].history): if len(datastore.data['watching'][uuid]['history']):
extra_update_obj['previous_md5'] = get_current_checksum_include_ignore_text(uuid=uuid) extra_update_obj['previous_md5'] = get_current_checksum_include_ignore_text(uuid=uuid)
# Be sure proxy value is None # Be sure proxy value is None
@@ -629,12 +626,6 @@ def changedetection_app(config=None, datastore_o=None):
if request.method == 'POST' and not form.validate(): if request.method == 'POST' and not form.validate():
flash("An error occurred, please see below.", "error") flash("An error occurred, please see below.", "error")
visualselector_data_is_ready = datastore.visualselector_data_is_ready(uuid)
# Only works reliably with Playwright
visualselector_enabled = os.getenv('PLAYWRIGHT_DRIVER_URL', False) and default['fetch_backend'] == 'html_webdriver'
output = render_template("edit.html", output = render_template("edit.html",
uuid=uuid, uuid=uuid,
watch=datastore.data['watching'][uuid], watch=datastore.data['watching'][uuid],
@@ -642,9 +633,7 @@ def changedetection_app(config=None, datastore_o=None):
has_empty_checktime=using_default_check_time, has_empty_checktime=using_default_check_time,
using_global_webdriver_wait=default['webdriver_delay'] is None, using_global_webdriver_wait=default['webdriver_delay'] is None,
current_base_url=datastore.data['settings']['application']['base_url'], current_base_url=datastore.data['settings']['application']['base_url'],
emailprefix=os.getenv('NOTIFICATION_MAIL_BUTTON_PREFIX', False), emailprefix=os.getenv('NOTIFICATION_MAIL_BUTTON_PREFIX', False)
visualselector_data_is_ready=visualselector_data_is_ready,
visualselector_enabled=visualselector_enabled
) )
return output return output
@@ -751,14 +740,15 @@ def changedetection_app(config=None, datastore_o=None):
return output return output
# Clear all statuses, so we do not see the 'unviewed' class # Clear all statuses, so we do not see the 'unviewed' class
@app.route("/form/mark-all-viewed", methods=['GET']) @app.route("/api/mark-all-viewed", methods=['GET'])
@login_required @login_required
def mark_all_viewed(): def mark_all_viewed():
# Save the current newest history as the most recently viewed # Save the current newest history as the most recently viewed
for watch_uuid, watch in datastore.data['watching'].items(): for watch_uuid, watch in datastore.data['watching'].items():
datastore.set_last_viewed(watch_uuid, int(time.time())) datastore.set_last_viewed(watch_uuid, watch['newest_history_key'])
flash("Cleared all statuses.")
return redirect(url_for('index')) return redirect(url_for('index'))
@app.route("/diff/<string:uuid>", methods=['GET']) @app.route("/diff/<string:uuid>", methods=['GET'])
@@ -776,17 +766,20 @@ def changedetection_app(config=None, datastore_o=None):
flash("No history found for the specified link, bad link?", "error") flash("No history found for the specified link, bad link?", "error")
return redirect(url_for('index')) return redirect(url_for('index'))
history = watch.history dates = list(watch['history'].keys())
dates = list(history.keys()) # Convert to int, sort and back to str again
# @todo replace datastore getter that does this automatically
dates = [int(i) for i in dates]
dates.sort(reverse=True)
dates = [str(i) for i in dates]
if len(dates) < 2: if len(dates) < 2:
flash("Not enough saved change detection snapshots to produce a report.", "error") flash("Not enough saved change detection snapshots to produce a report.", "error")
return redirect(url_for('index')) return redirect(url_for('index'))
# Save the current newest history as the most recently viewed # Save the current newest history as the most recently viewed
datastore.set_last_viewed(uuid, time.time()) datastore.set_last_viewed(uuid, dates[0])
newest_file = watch['history'][dates[0]]
newest_file = history[dates[-1]]
try: try:
with open(newest_file, 'r') as f: with open(newest_file, 'r') as f:
@@ -796,10 +789,10 @@ def changedetection_app(config=None, datastore_o=None):
previous_version = request.args.get('previous_version') previous_version = request.args.get('previous_version')
try: try:
previous_file = history[previous_version] previous_file = watch['history'][previous_version]
except KeyError: except KeyError:
# Not present, use a default value, the second one in the sorted list. # Not present, use a default value, the second one in the sorted list.
previous_file = history[dates[-2]] previous_file = watch['history'][dates[1]]
try: try:
with open(previous_file, 'r') as f: with open(previous_file, 'r') as f:
@@ -816,7 +809,7 @@ def changedetection_app(config=None, datastore_o=None):
extra_stylesheets=extra_stylesheets, extra_stylesheets=extra_stylesheets,
versions=dates[1:], versions=dates[1:],
uuid=uuid, uuid=uuid,
newest_version_timestamp=dates[-1], newest_version_timestamp=dates[0],
current_previous_version=str(previous_version), current_previous_version=str(previous_version),
current_diff_url=watch['url'], current_diff_url=watch['url'],
extra_title=" - Diff - {}".format(watch['title'] if watch['title'] else watch['url']), extra_title=" - Diff - {}".format(watch['title'] if watch['title'] else watch['url']),
@@ -844,9 +837,9 @@ def changedetection_app(config=None, datastore_o=None):
flash("No history found for the specified link, bad link?", "error") flash("No history found for the specified link, bad link?", "error")
return redirect(url_for('index')) return redirect(url_for('index'))
if watch.history_n >0: if len(watch['history']):
timestamps = sorted(watch.history.keys(), key=lambda x: int(x)) timestamps = sorted(watch['history'].keys(), key=lambda x: int(x))
filename = watch.history[timestamps[-1]] filename = watch['history'][timestamps[-1]]
try: try:
with open(filename, 'r') as f: with open(filename, 'r') as f:
tmp = f.readlines() tmp = f.readlines()
@@ -983,9 +976,10 @@ def changedetection_app(config=None, datastore_o=None):
@app.route("/static/<string:group>/<string:filename>", methods=['GET']) @app.route("/static/<string:group>/<string:filename>", methods=['GET'])
def static_content(group, filename): def static_content(group, filename):
from flask import make_response
if group == 'screenshot': if group == 'screenshot':
from flask import make_response
# Could be sensitive, follow password requirements # Could be sensitive, follow password requirements
if datastore.data['settings']['application']['password'] and not flask_login.current_user.is_authenticated: if datastore.data['settings']['application']['password'] and not flask_login.current_user.is_authenticated:
abort(403) abort(403)
@@ -1004,26 +998,6 @@ def changedetection_app(config=None, datastore_o=None):
except FileNotFoundError: except FileNotFoundError:
abort(404) abort(404)
if group == 'visual_selector_data':
# Could be sensitive, follow password requirements
if datastore.data['settings']['application']['password'] and not flask_login.current_user.is_authenticated:
abort(403)
# These files should be in our subdirectory
try:
# set nocache, set content-type
watch_dir = datastore_o.datastore_path + "/" + filename
response = make_response(send_from_directory(filename="elements.json", directory=watch_dir, path=watch_dir + "/elements.json"))
response.headers['Content-type'] = 'application/json'
response.headers['Cache-Control'] = 'no-cache, no-store, must-revalidate'
response.headers['Pragma'] = 'no-cache'
response.headers['Expires'] = 0
return response
except FileNotFoundError:
abort(404)
# These files should be in our subdirectory # These files should be in our subdirectory
try: try:
return send_from_directory("static/{}".format(group), path=filename) return send_from_directory("static/{}".format(group), path=filename)
@@ -1140,7 +1114,6 @@ def changedetection_app(config=None, datastore_o=None):
# copy it to memory as trim off what we dont need (history) # copy it to memory as trim off what we dont need (history)
watch = deepcopy(datastore.data['watching'][uuid]) watch = deepcopy(datastore.data['watching'][uuid])
# For older versions that are not a @property
if (watch.get('history')): if (watch.get('history')):
del (watch['history']) del (watch['history'])
@@ -1170,14 +1143,14 @@ def changedetection_app(config=None, datastore_o=None):
except Exception as e: except Exception as e:
logging.error("Error sharing -{}".format(str(e))) flash("Could not share, something went wrong while communicating with the share server.", 'error')
flash("Could not share, something went wrong while communicating with the share server - {}".format(str(e)), 'error')
# https://changedetection.io/share/VrMv05wpXyQa # https://changedetection.io/share/VrMv05wpXyQa
# in the browser - should give you a nice info page - wtf # in the browser - should give you a nice info page - wtf
# paste in etc # paste in etc
return redirect(url_for('index')) return redirect(url_for('index'))
# @todo handle ctrl break # @todo handle ctrl break
ticker_thread = threading.Thread(target=ticker_thread_check_time_launch_checks).start() ticker_thread = threading.Thread(target=ticker_thread_check_time_launch_checks).start()
@@ -1233,7 +1206,7 @@ def notification_runner():
notification.process_notification(n_object, datastore) notification.process_notification(n_object, datastore)
except Exception as e: except Exception as e:
logging.error("Watch URL: {} Error {}".format(n_object['watch_url'], str(e))) print("Watch URL: {} Error {}".format(n_object['watch_url'], str(e)))
# UUID wont be present when we submit a 'test' from the global settings # UUID wont be present when we submit a 'test' from the global settings
if 'uuid' in n_object: if 'uuid' in n_object:
@@ -1250,7 +1223,6 @@ def notification_runner():
# Thread runner to check every minute, look for new watches to feed into the Queue. # Thread runner to check every minute, look for new watches to feed into the Queue.
def ticker_thread_check_time_launch_checks(): def ticker_thread_check_time_launch_checks():
from changedetectionio import update_worker from changedetectionio import update_worker
import logging
# Spin up Workers that do the fetching # Spin up Workers that do the fetching
# Can be overriden by ENV or use the default settings # Can be overriden by ENV or use the default settings
@@ -1269,10 +1241,9 @@ def ticker_thread_check_time_launch_checks():
running_uuids.append(t.current_uuid) running_uuids.append(t.current_uuid)
# Re #232 - Deepcopy the data incase it changes while we're iterating through it all # Re #232 - Deepcopy the data incase it changes while we're iterating through it all
watch_uuid_list = []
while True: while True:
try: try:
watch_uuid_list = datastore.data['watching'].keys() copied_datastore = deepcopy(datastore)
except RuntimeError as e: except RuntimeError as e:
# RuntimeError: dictionary changed size during iteration # RuntimeError: dictionary changed size during iteration
time.sleep(0.1) time.sleep(0.1)
@@ -1289,12 +1260,7 @@ def ticker_thread_check_time_launch_checks():
recheck_time_minimum_seconds = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60)) recheck_time_minimum_seconds = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60))
recheck_time_system_seconds = datastore.threshold_seconds recheck_time_system_seconds = datastore.threshold_seconds
for uuid in watch_uuid_list: for uuid, watch in copied_datastore.data['watching'].items():
watch = datastore.data['watching'].get(uuid)
if not watch:
logging.error("Watch: {} no longer present.".format(uuid))
continue
# No need todo further processing if it's paused # No need todo further processing if it's paused
if watch['paused']: if watch['paused']:
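Two index conventions for the watch history appear in earlier hunks of this file's diff: one side keeps the keys in ascending order and reads dates[-1]/dates[-2] for the newest/previous snapshot, while the other int-sorts descending and reads dates[0]/dates[1]. A small sanity sketch with made-up timestamps showing that both conventions select the same pair:

```python
# Made-up epoch timestamps, stored ascending as in the history index
dates = ["1652969616", "1652973216", "1652976816"]

newest_asc, prev_asc = dates[-1], dates[-2]

# The other convention from the hunks above: int-sort descending, back to str
desc = [str(i) for i in sorted((int(i) for i in dates), reverse=True)]
newest_desc, prev_desc = desc[0], desc[1]

assert (newest_asc, prev_asc) == (newest_desc, prev_desc)
print(newest_asc, prev_asc)  # 1652976816 1652973216
```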


@@ -28,7 +28,8 @@ class Watch(Resource):
return "OK", 200 return "OK", 200
# Return without history, get that via another API call # Return without history, get that via another API call
watch['history_n'] = watch.history_n watch['history_n'] = len(watch['history'])
del (watch['history'])
return watch return watch
@auth.check_token @auth.check_token
@@ -51,7 +52,7 @@ class WatchHistory(Resource):
watch = self.datastore.data['watching'].get(uuid) watch = self.datastore.data['watching'].get(uuid)
if not watch: if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid)) abort(404, message='No watch exists with the UUID of {}'.format(uuid))
return watch.history, 200 return watch['history'], 200
class WatchSingleHistory(Resource): class WatchSingleHistory(Resource):
@@ -68,13 +69,13 @@ class WatchSingleHistory(Resource):
if not watch: if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid)) abort(404, message='No watch exists with the UUID of {}'.format(uuid))
if not len(watch.history): if not len(watch['history']):
abort(404, message='Watch found but no history exists for the UUID {}'.format(uuid)) abort(404, message='Watch found but no history exists for the UUID {}'.format(uuid))
if timestamp == 'latest': if timestamp == 'latest':
timestamp = list(watch.history.keys())[-1] timestamp = list(watch['history'].keys())[-1]
with open(watch.history[timestamp], 'r') as f: with open(watch['history'][timestamp], 'r') as f:
content = f.read() content = f.read()
response = make_response(content, 200) response = make_response(content, 200)
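The hunks above only show the WatchHistory and WatchSingleHistory resource handlers; how they are mounted and authenticated is not part of this compare. Below is a purely hypothetical client call, where the /api/v1 prefix, the URL layout, and the x-api-key header name are all assumptions:

```python
# Hypothetical client sketch; routes and auth header are assumptions.
import requests

BASE = "http://localhost:5000/api/v1"          # assumed mount point
HEADERS = {"x-api-key": "YOUR-API-KEY"}        # assumed header name
uuid = "00000000-0000-0000-0000-000000000000"  # placeholder watch UUID

# WatchHistory: dict of timestamp -> snapshot filename
history = requests.get(f"{BASE}/watch/{uuid}/history", headers=HEADERS).json()

# WatchSingleHistory with timestamp='latest': the newest snapshot text
latest = requests.get(f"{BASE}/watch/{uuid}/history/latest", headers=HEADERS)
print(sorted(history)[-3:], latest.text[:200])
```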


@@ -1,19 +1,10 @@
from abc import ABC, abstractmethod from abc import ABC, abstractmethod
import chardet import chardet
import json
import os import os
import requests import requests
import time import time
import sys import sys
class PageUnloadable(Exception):
def __init__(self, status_code, url):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
return
pass
class EmptyReply(Exception): class EmptyReply(Exception):
def __init__(self, status_code, url): def __init__(self, status_code, url):
# Set this so we can use it in other parts of the app # Set this so we can use it in other parts of the app
@@ -22,14 +13,6 @@ class EmptyReply(Exception):
return return
pass pass
class ScreenshotUnavailable(Exception):
def __init__(self, status_code, url):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
return
pass
class ReplyWithContentButNoText(Exception): class ReplyWithContentButNoText(Exception):
def __init__(self, status_code, url): def __init__(self, status_code, url):
# Set this so we can use it in other parts of the app # Set this so we can use it in other parts of the app
@@ -44,135 +27,6 @@ class Fetcher():
status_code = None status_code = None
content = None content = None
headers = None headers = None
fetcher_description = "No description"
xpath_element_js = """
// Include the getXpath script directly, easier than fetching
!function(e,n){"object"==typeof exports&&"undefined"!=typeof module?module.exports=n():"function"==typeof define&&define.amd?define(n):(e=e||self).getXPath=n()}(this,function(){return function(e){var n=e;if(n&&n.id)return'//*[@id="'+n.id+'"]';for(var o=[];n&&Node.ELEMENT_NODE===n.nodeType;){for(var i=0,r=!1,d=n.previousSibling;d;)d.nodeType!==Node.DOCUMENT_TYPE_NODE&&d.nodeName===n.nodeName&&i++,d=d.previousSibling;for(d=n.nextSibling;d;){if(d.nodeName===n.nodeName){r=!0;break}d=d.nextSibling}o.push((n.prefix?n.prefix+":":"")+n.localName+(i||r?"["+(i+1)+"]":"")),n=n.parentNode}return o.length?"/"+o.reverse().join("/"):""}});
const findUpTag = (el) => {
let r = el
chained_css = [];
depth=0;
// Strategy 1: Keep going up until we hit an ID tag, imagine it's like #list-widget div h4
while (r.parentNode) {
if(depth==5) {
break;
}
if('' !==r.id) {
chained_css.unshift("#"+r.id);
final_selector= chained_css.join('>');
// Be sure theres only one, some sites have multiples of the same ID tag :-(
if (window.document.querySelectorAll(final_selector).length ==1 ) {
return final_selector;
}
return null;
} else {
chained_css.unshift(r.tagName.toLowerCase());
}
r=r.parentNode;
depth+=1;
}
return null;
}
// @todo - if it's SVG or IMG, go into image diff mode
var elements = window.document.querySelectorAll("div,span,form,table,tbody,tr,td,a,p,ul,li,h1,h2,h3,h4, header, footer, section, article, aside, details, main, nav, section, summary");
var size_pos=[];
// after page fetch, inject this JS
// build a map of all elements and their positions (maybe that only include text?)
var bbox;
for (var i = 0; i < elements.length; i++) {
bbox = elements[i].getBoundingClientRect();
// forget really small ones
if (bbox['width'] <20 && bbox['height'] < 20 ) {
continue;
}
// @todo the getXpath kind of sucks, it doesnt know when there is for example just one ID sometimes
// it should not traverse when we know we can anchor off just an ID one level up etc..
// maybe, get current class or id, keep traversing up looking for only class or id until there is just one match
// 1st primitive - if it has class, try joining it all and select, if theres only one.. well thats us.
xpath_result=false;
try {
var d= findUpTag(elements[i]);
if (d) {
xpath_result =d;
}
} catch (e) {
console.log(e);
}
// You could swap it and default to getXpath and then try the smarter one
// default back to the less intelligent one
if (!xpath_result) {
try {
// I've seen on FB and eBay that this doesnt work
// ReferenceError: getXPath is not defined at eval (eval at evaluate (:152:29), <anonymous>:67:20) at UtilityScript.evaluate (<anonymous>:159:18) at UtilityScript.<anonymous> (<anonymous>:1:44)
xpath_result = getXPath(elements[i]);
} catch (e) {
console.log(e);
continue;
}
}
if(window.getComputedStyle(elements[i]).visibility === "hidden") {
continue;
}
size_pos.push({
xpath: xpath_result,
width: Math.round(bbox['width']),
height: Math.round(bbox['height']),
left: Math.floor(bbox['left']),
top: Math.floor(bbox['top']),
childCount: elements[i].childElementCount
});
}
// inject the current one set in the css_filter, which may be a CSS rule
// used for displaying the current one in VisualSelector, where its not one we generated.
if (css_filter.length) {
q=false;
try {
// is it xpath?
if (css_filter.startsWith('/') || css_filter.startsWith('xpath:')) {
q=document.evaluate(css_filter.replace('xpath:',''), document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
} else {
q=document.querySelector(css_filter);
}
} catch (e) {
// Maybe catch DOMException and alert?
console.log(e);
}
bbox=false;
if(q) {
bbox = q.getBoundingClientRect();
}
if (bbox && bbox['width'] >0 && bbox['height']>0) {
size_pos.push({
xpath: css_filter,
width: bbox['width'],
height: bbox['height'],
left: bbox['left'],
top: bbox['top'],
childCount: q.childElementCount
});
}
}
// Window.width required for proper scaling in the frontend
return {'size_pos':size_pos, 'browser_width': window.innerWidth};
"""
xpath_data = None
# Will be needed in the future by the VisualSelector, always get this where possible. # Will be needed in the future by the VisualSelector, always get this where possible.
screenshot = False screenshot = False
fetcher_description = "No description" fetcher_description = "No description"
@@ -193,8 +47,7 @@ class Fetcher():
request_headers, request_headers,
request_body, request_body,
request_method, request_method,
ignore_status_codes=False, ignore_status_codes=False):
current_css_filter=None):
# Should set self.error, self.status_code and self.content # Should set self.error, self.status_code and self.content
pass pass
@@ -275,8 +128,7 @@ class base_html_playwright(Fetcher):
request_headers, request_headers,
request_body, request_body,
request_method, request_method,
ignore_status_codes=False, ignore_status_codes=False):
current_css_filter=None):
from playwright.sync_api import sync_playwright from playwright.sync_api import sync_playwright
import playwright._impl._api_types import playwright._impl._api_types
@@ -293,16 +145,11 @@ class base_html_playwright(Fetcher):
# Use the default one configured in the App.py model that's passed from fetch_site_status.py # Use the default one configured in the App.py model that's passed from fetch_site_status.py
context = browser.new_context( context = browser.new_context(
user_agent=request_headers['User-Agent'] if request_headers.get('User-Agent') else 'Mozilla/5.0', user_agent=request_headers['User-Agent'] if request_headers.get('User-Agent') else 'Mozilla/5.0',
proxy=self.proxy, proxy=self.proxy
# This is needed to enable JavaScript execution on GitHub and others
bypass_csp=True,
# Should never be needed
accept_downloads=False
) )
page = context.new_page() page = context.new_page()
page.set_viewport_size({"width": 1280, "height": 1024})
try: try:
# Bug - never set viewport size BEFORE page.goto
response = page.goto(url, timeout=timeout * 1000, wait_until='commit') response = page.goto(url, timeout=timeout * 1000, wait_until='commit')
# Wait_until = commit # Wait_until = commit
# - `'commit'` - consider operation to be finished when network response is received and the document started loading. # - `'commit'` - consider operation to be finished when network response is received and the document started loading.
@@ -311,49 +158,22 @@ class base_html_playwright(Fetcher):
extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
page.wait_for_timeout(extra_wait * 1000) page.wait_for_timeout(extra_wait * 1000)
except playwright._impl._api_types.TimeoutError as e: except playwright._impl._api_types.TimeoutError as e:
context.close()
browser.close()
raise EmptyReply(url=url, status_code=None) raise EmptyReply(url=url, status_code=None)
except Exception as e:
context.close()
browser.close()
raise PageUnloadable(url=url, status_code=None)
if response is None: if response is None:
context.close()
browser.close()
raise EmptyReply(url=url, status_code=None) raise EmptyReply(url=url, status_code=None)
if len(page.content().strip()) == 0: if len(page.content().strip()) == 0:
context.close()
browser.close()
raise EmptyReply(url=url, status_code=None) raise EmptyReply(url=url, status_code=None)
# Bug 2(?) Set the viewport size AFTER loading the page
page.set_viewport_size({"width": 1280, "height": 1024})
self.status_code = response.status self.status_code = response.status
self.content = page.content() self.content = page.content()
self.headers = response.all_headers() self.headers = response.all_headers()
if current_css_filter is not None:
page.evaluate("var css_filter={}".format(json.dumps(current_css_filter)))
else:
page.evaluate("var css_filter=''")
self.xpath_data = page.evaluate("async () => {" + self.xpath_element_js + "}")
# Bug 3 in Playwright screenshot handling
# Some bug where it gives the wrong screenshot size, but making a request with the clip set first seems to solve it # Some bug where it gives the wrong screenshot size, but making a request with the clip set first seems to solve it
# JPEG is better here because the screenshots can be very very large # JPEG is better here because the screenshots can be very very large
try: page.screenshot(type='jpeg', clip={'x': 1.0, 'y': 1.0, 'width': 1280, 'height': 1024})
page.screenshot(type='jpeg', clip={'x': 1.0, 'y': 1.0, 'width': 1280, 'height': 1024}) self.screenshot = page.screenshot(type='jpeg', full_page=True, quality=90)
self.screenshot = page.screenshot(type='jpeg', full_page=True, quality=92)
except Exception as e:
context.close()
browser.close()
raise ScreenshotUnavailable(url=url, status_code=None)
context.close() context.close()
browser.close() browser.close()
@@ -405,8 +225,7 @@ class base_html_webdriver(Fetcher):
request_headers, request_headers,
request_body, request_body,
request_method, request_method,
ignore_status_codes=False, ignore_status_codes=False):
current_css_filter=None):
from selenium import webdriver from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
@@ -426,10 +245,6 @@ class base_html_webdriver(Fetcher):
self.quit() self.quit()
raise raise
self.driver.set_window_size(1280, 1024)
self.driver.implicitly_wait(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)))
self.screenshot = self.driver.get_screenshot_as_png()
# @todo - how to check this? is it possible? # @todo - how to check this? is it possible?
self.status_code = 200 self.status_code = 200
# @todo somehow we should try to get this working for WebDriver # @todo somehow we should try to get this working for WebDriver
@@ -439,6 +254,8 @@ class base_html_webdriver(Fetcher):
time.sleep(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay) time.sleep(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay)
self.content = self.driver.page_source self.content = self.driver.page_source
self.headers = {} self.headers = {}
self.screenshot = self.driver.get_screenshot_as_png()
self.quit()
# Does the connection to the webdriver work? run a test connection. # Does the connection to the webdriver work? run a test connection.
def is_ready(self): def is_ready(self):
@@ -475,8 +292,7 @@ class html_requests(Fetcher):
request_headers, request_headers,
request_body, request_body,
request_method, request_method,
ignore_status_codes=False, ignore_status_codes=False):
current_css_filter=None):
proxies={} proxies={}
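For context on the Fetcher interface being changed above, here is a minimal sketch of a fetcher that satisfies the right-hand run() signature (the variant without current_css_filter). The class name and the requests-based body are illustrative, not the project's actual html_requests implementation.

```python
import requests

class plain_text_fetcher:
    """Illustrative fetcher matching the abstract run() signature shown above."""
    fetcher_description = "Minimal requests-based fetcher (sketch)"
    status_code = None
    content = None
    headers = None
    screenshot = False

    def run(self, url, timeout, request_headers, request_body, request_method,
            ignore_status_codes=False):
        # Per the abstract base: run() should populate self.status_code,
        # self.content and self.headers.
        r = requests.request(request_method, url, headers=request_headers,
                             data=request_body, timeout=timeout)
        self.status_code = r.status_code
        self.content = r.text
        self.headers = r.headers
```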


@@ -94,7 +94,6 @@ class perform_site_check():
# If the klass doesnt exist, just use a default # If the klass doesnt exist, just use a default
klass = getattr(content_fetcher, "html_requests") klass = getattr(content_fetcher, "html_requests")
proxy_args = self.set_proxy_from_list(watch) proxy_args = self.set_proxy_from_list(watch)
fetcher = klass(proxy_override=proxy_args) fetcher = klass(proxy_override=proxy_args)
@@ -105,8 +104,7 @@ class perform_site_check():
elif system_webdriver_delay is not None: elif system_webdriver_delay is not None:
fetcher.render_extract_delay = system_webdriver_delay fetcher.render_extract_delay = system_webdriver_delay
fetcher.run(url, timeout, request_headers, request_body, request_method, ignore_status_code, watch['css_filter']) fetcher.run(url, timeout, request_headers, request_body, request_method, ignore_status_code)
fetcher.quit()
# Fetching complete, now filters # Fetching complete, now filters
# @todo move to class / maybe inside of fetcher abstract base? # @todo move to class / maybe inside of fetcher abstract base?
@@ -204,20 +202,6 @@ class perform_site_check():
else: else:
stripped_text_from_html = stripped_text_from_html.encode('utf8') stripped_text_from_html = stripped_text_from_html.encode('utf8')
# 615 Extract text by regex
extract_text = watch.get('extract_text', [])
if len(extract_text) > 0:
regex_matched_output = []
for s_re in extract_text:
result = re.findall(s_re.encode('utf8'), stripped_text_from_html,
flags=re.MULTILINE | re.DOTALL | re.LOCALE)
if result:
regex_matched_output.append(result[0])
if regex_matched_output:
stripped_text_from_html = b'\n'.join(regex_matched_output)
text_content_before_ignored_filter = stripped_text_from_html
# Re #133 - if we should strip whitespaces from triggering the change detected comparison # Re #133 - if we should strip whitespaces from triggering the change detected comparison
if self.datastore.data['settings']['application'].get('ignore_whitespace', False): if self.datastore.data['settings']['application'].get('ignore_whitespace', False):
fetched_md5 = hashlib.md5(stripped_text_from_html.translate(None, b'\r\n\t ')).hexdigest() fetched_md5 = hashlib.md5(stripped_text_from_html.translate(None, b'\r\n\t ')).hexdigest()
@@ -235,11 +219,9 @@ class perform_site_check():
# Yeah, lets block first until something matches # Yeah, lets block first until something matches
blocked_by_not_found_trigger_text = True blocked_by_not_found_trigger_text = True
# Filter and trigger works the same, so reuse it # Filter and trigger works the same, so reuse it
# It should return the line numbers that match
result = html_tools.strip_ignore_text(content=str(stripped_text_from_html), result = html_tools.strip_ignore_text(content=str(stripped_text_from_html),
wordlist=watch['trigger_text'], wordlist=watch['trigger_text'],
mode="line numbers") mode="line numbers")
# If it returned any lines that matched..
if result: if result:
blocked_by_not_found_trigger_text = False blocked_by_not_found_trigger_text = False
@@ -254,4 +236,4 @@ class perform_site_check():
if not watch['title'] or not len(watch['title']): if not watch['title'] or not len(watch['title']):
update_obj['title'] = html_tools.extract_element(find='title', html_content=fetcher.content) update_obj['title'] = html_tools.extract_element(find='title', html_content=fetcher.content)
return changed_detected, update_obj, text_content_before_ignored_filter, fetcher.screenshot, fetcher.xpath_data return changed_detected, update_obj, text_content_before_ignored_filter, fetcher.screenshot
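The removed block above ("615 Extract text by regex") keeps only the first match of each user-supplied regex and joins the matches with newlines. A standalone rerun of that logic with invented sample text, mirroring the hunk (minus the re.LOCALE flag), to make the behaviour concrete:

```python
import re

# Invented sample values; the logic mirrors the hunk above
stripped_text_from_html = b"Price: 19.99 EUR\nStock: 42 units\nFooter noise"
extract_text = [r"Price: [\d\.]+ \w+", r"Stock: \d+ units"]

regex_matched_output = []
for s_re in extract_text:
    result = re.findall(s_re.encode("utf8"), stripped_text_from_html,
                        flags=re.MULTILINE | re.DOTALL)
    if result:
        regex_matched_output.append(result[0])

if regex_matched_output:
    stripped_text_from_html = b"\n".join(regex_matched_output)

print(stripped_text_from_html.decode())  # Price: 19.99 EUR / Stock: 42 units
```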


@@ -223,7 +223,7 @@ class validateURL(object):
except validators.ValidationFailure: except validators.ValidationFailure:
message = field.gettext('\'%s\' is not a valid URL.' % (field.data.strip())) message = field.gettext('\'%s\' is not a valid URL.' % (field.data.strip()))
raise ValidationError(message) raise ValidationError(message)
class ValidateListRegex(object): class ValidateListRegex(object):
""" """
Validates that anything that looks like a regex passes as a regex Validates that anything that looks like a regex passes as a regex
@@ -307,7 +307,7 @@ class ValidateCSSJSONXPATHInput(object):
class quickWatchForm(Form): class quickWatchForm(Form):
url = fields.URLField('URL', validators=[validateURL()]) url = fields.URLField('URL', validators=[validateURL()])
tag = StringField('Group tag', [validators.Optional()]) tag = StringField('Group tag', [validators.Optional(), validators.Length(max=35)])
# Common to a single watch and the global settings # Common to a single watch and the global settings
class commonSettingsForm(Form): class commonSettingsForm(Form):
@@ -323,16 +323,13 @@ class commonSettingsForm(Form):
class watchForm(commonSettingsForm): class watchForm(commonSettingsForm):
url = fields.URLField('URL', validators=[validateURL()]) url = fields.URLField('URL', validators=[validateURL()])
tag = StringField('Group tag', [validators.Optional()], default='') tag = StringField('Group tag', [validators.Optional(), validators.Length(max=35)], default='')
time_between_check = FormField(TimeBetweenCheckForm) time_between_check = FormField(TimeBetweenCheckForm)
css_filter = StringField('CSS/JSON/XPATH Filter', [ValidateCSSJSONXPATHInput()], default='') css_filter = StringField('CSS/JSON/XPATH Filter', [ValidateCSSJSONXPATHInput()], default='')
subtractive_selectors = StringListField('Remove elements', [ValidateCSSJSONXPATHInput(allow_xpath=False, allow_json=False)]) subtractive_selectors = StringListField('Remove elements', [ValidateCSSJSONXPATHInput(allow_xpath=False, allow_json=False)])
extract_text = StringListField('Extract text', [ValidateListRegex()])
title = StringField('Title', default='') title = StringField('Title', default='')
ignore_text = StringListField('Ignore text', [ValidateListRegex()]) ignore_text = StringListField('Ignore text', [ValidateListRegex()])


@@ -39,7 +39,7 @@ def element_removal(selectors: List[str], html_content):
def xpath_filter(xpath_filter, html_content): def xpath_filter(xpath_filter, html_content):
from lxml import etree, html from lxml import etree, html
tree = html.fromstring(bytes(html_content, encoding='utf-8')) tree = html.fromstring(html_content)
html_block = "" html_block = ""
for item in tree.xpath(xpath_filter.strip(), namespaces={'re':'http://exslt.org/regular-expressions'}): for item in tree.xpath(xpath_filter.strip(), namespaces={'re':'http://exslt.org/regular-expressions'}):
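A likely motivation for the bytes(...) wrapping on the left-hand side of this hunk, inferred rather than stated in the compare: lxml rejects unicode strings that carry an XML encoding declaration, whereas bytes input parses fine. A small sketch:

```python
from lxml import html

doc = '<?xml version="1.0" encoding="utf-8"?><html><body><p>hi</p></body></html>'

# html.fromstring(doc) can fail with
# "ValueError: Unicode strings with encoding declaration are not supported"
tree = html.fromstring(bytes(doc, encoding='utf-8'))  # bytes input is accepted
print(tree.xpath('//p/text()'))  # ['hi']
```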


@@ -92,7 +92,7 @@ class import_distill_io_json(Importer):
for d in data.get('data'): for d in data.get('data'):
d_config = json.loads(d['config']) d_config = json.loads(d['config'])
extras = {'title': d.get('name', None)} extras = {'title': d['name']}
if len(d['uri']) and good < 5000: if len(d['uri']) and good < 5000:
try: try:
@@ -114,9 +114,12 @@ class import_distill_io_json(Importer):
except IndexError: except IndexError:
pass pass
try:
if d.get('tags', False):
extras['tag'] = ", ".join(d['tags']) extras['tag'] = ", ".join(d['tags'])
except KeyError:
pass
except IndexError:
pass
new_uuid = datastore.add_watch(url=d['uri'].strip(), new_uuid = datastore.add_watch(url=d['uri'].strip(),
extras=extras, extras=extras,
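For reference, the shape of the distill.io export implied by the keys read in the hunk above (data, config, name, uri, tags). Only the key names come from the code; the field values below are invented:

```python
import json

export = {
    "data": [
        {
            "name": "Example product page",        # -> extras['title']
            "uri": "https://example.com/product",  # -> watch URL
            "tags": ["pricing", "example"],         # -> extras['tag'], comma-joined
            "config": "{}",  # JSON string; its contents are not shown in this compare
        }
    ]
}

for d in export["data"]:
    d_config = json.loads(d["config"])
    extras = {"title": d.get("name", None)}
    if d.get("tags", False):
        extras["tag"] = ", ".join(d["tags"])
    print(d["uri"], extras)
```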


@@ -35,7 +35,7 @@ class model(dict):
'fetch_backend': os.getenv("DEFAULT_FETCH_BACKEND", "html_requests"), 'fetch_backend': os.getenv("DEFAULT_FETCH_BACKEND", "html_requests"),
'global_ignore_text': [], # List of text to ignore when calculating the comparison checksum 'global_ignore_text': [], # List of text to ignore when calculating the comparison checksum
'global_subtractive_selectors': [], 'global_subtractive_selectors': [],
'ignore_whitespace': True, 'ignore_whitespace': False,
'render_anchor_tag_content': False, 'render_anchor_tag_content': False,
'notification_urls': [], # Apprise URL list 'notification_urls': [], # Apprise URL list
# Custom notification content # Custom notification content


@@ -1,4 +1,5 @@
import os import os
import uuid as uuid_builder import uuid as uuid_builder
minimum_seconds_recheck_time = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60)) minimum_seconds_recheck_time = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60))
@@ -11,32 +12,29 @@ from changedetectionio.notification import (
class model(dict): class model(dict):
__newest_history_key = None base_config = {
__history_n=0
__base_config = {
'url': None, 'url': None,
'tag': None, 'tag': None,
'last_checked': 0, 'last_checked': 0,
'last_changed': 0, 'last_changed': 0,
'paused': False, 'paused': False,
'last_viewed': 0, # history key value of the last viewed via the [diff] link 'last_viewed': 0, # history key value of the last viewed via the [diff] link
#'newest_history_key': 0, 'newest_history_key': 0,
'title': None, 'title': None,
'previous_md5': False, 'previous_md5': False,
'uuid': str(uuid_builder.uuid4()), # UUID not needed, should be generated only as a key
# 'uuid':
'headers': {}, # Extra headers to send 'headers': {}, # Extra headers to send
'body': None, 'body': None,
'method': 'GET', 'method': 'GET',
#'history': {}, # Dict of timestamp and output stripped filename 'history': {}, # Dict of timestamp and output stripped filename
'ignore_text': [], # List of text to ignore when calculating the comparison checksum 'ignore_text': [], # List of text to ignore when calculating the comparison checksum
# Custom notification content # Custom notification content
'notification_urls': [], # List of URLs to add to the notification Queue (Usually AppRise) 'notification_urls': [], # List of URLs to add to the notification Queue (Usually AppRise)
'notification_title': default_notification_title, 'notification_title': default_notification_title,
'notification_body': default_notification_body, 'notification_body': default_notification_body,
'notification_format': default_notification_format, 'notification_format': default_notification_format,
'css_filter': '', 'css_filter': "",
'extract_text': [], # Extract text by regex after filters
'subtractive_selectors': [], 'subtractive_selectors': [],
'trigger_text': [], # List of text or regex to wait for until a change is detected 'trigger_text': [], # List of text or regex to wait for until a change is detected
'fetch_backend': None, 'fetch_backend': None,
@@ -50,103 +48,10 @@ class model(dict):
} }
def __init__(self, *arg, **kw): def __init__(self, *arg, **kw):
import uuid self.update(self.base_config)
self.update(self.__base_config)
self.__datastore_path = kw['datastore_path']
self['uuid'] = str(uuid.uuid4())
del kw['datastore_path']
if kw.get('default'):
self.update(kw['default'])
del kw['default']
# goes at the end so we update the default object with the initialiser # goes at the end so we update the default object with the initialiser
super(model, self).__init__(*arg, **kw) super(model, self).__init__(*arg, **kw)
@property
def viewed(self):
if int(self['last_viewed']) >= int(self.newest_history_key) :
return True
return False
@property
def history_n(self):
return self.__history_n
@property
def history(self):
tmp_history = {}
import logging
import time
# Read the history file as a dict
fname = os.path.join(self.__datastore_path, self.get('uuid'), "history.txt")
if os.path.isfile(fname):
logging.debug("Disk IO accessed " + str(time.time()))
with open(fname, "r") as f:
tmp_history = dict(i.strip().split(',', 2) for i in f.readlines())
if len(tmp_history):
self.__newest_history_key = list(tmp_history.keys())[-1]
self.__history_n = len(tmp_history)
return tmp_history
@property
def has_history(self):
fname = os.path.join(self.__datastore_path, self.get('uuid'), "history.txt")
return os.path.isfile(fname)
# Returns the newest key, but if theres only 1 record, then it's counted as not being new, so return 0.
@property
def newest_history_key(self):
if self.__newest_history_key is not None:
return self.__newest_history_key
if len(self.history) <= 1:
return 0
bump = self.history
return self.__newest_history_key
# Save some text file to the appropriate path and bump the history
# result_obj from fetch_site_status.run()
def save_history_text(self, contents, timestamp):
import uuid
from os import mkdir, path, unlink
import logging
output_path = "{}/{}".format(self.__datastore_path, self['uuid'])
# Incase the operator deleted it, check and create.
if not os.path.isdir(output_path):
mkdir(output_path)
snapshot_fname = "{}/{}.stripped.txt".format(output_path, uuid.uuid4())
logging.debug("Saving history text {}".format(snapshot_fname))
with open(snapshot_fname, 'wb') as f:
f.write(contents)
f.close()
# Append to index
# @todo check last char was \n
index_fname = "{}/history.txt".format(output_path)
with open(index_fname, 'a') as f:
f.write("{},{}\n".format(timestamp, snapshot_fname))
f.close()
self.__newest_history_key = timestamp
self.__history_n+=1
#@todo bump static cache of the last timestamp so we dont need to examine the file to set a proper ''viewed'' status
return snapshot_fname
@property @property
def has_empty_checktime(self): def has_empty_checktime(self):
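The save_history_text and history code above implies a simple on-disk index: each watch has a history.txt with one "timestamp,snapshot filename" line per snapshot. A tiny parsing sketch with made-up paths, using the same split as the history property:

```python
# Made-up example of a watch's history.txt index
history_txt = (
    "1652969616,/datastore/0a1b2c3d/5e6f7a8b.stripped.txt\n"
    "1652973216,/datastore/0a1b2c3d/9c0d1e2f.stripped.txt\n"
)

# Same parsing as the history property: timestamp -> snapshot filename
tmp_history = dict(line.strip().split(',', 2) for line in history_txt.splitlines())

newest_history_key = list(tmp_history.keys())[-1]
print(newest_history_key, tmp_history[newest_history_key])
```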


@@ -22,26 +22,3 @@ echo "RUNNING WITH BASE_URL SET"
export BASE_URL="https://really-unique-domain.io" export BASE_URL="https://really-unique-domain.io"
pytest tests/test_notification.py pytest tests/test_notification.py
# Now for the selenium and playwright/browserless fetchers
# Note - this is not UI functional tests - just checking that each one can fetch the content
echo "TESTING WEBDRIVER FETCH > SELENIUM/WEBDRIVER..."
docker run -d --name $$-test_selenium -p 4444:4444 --rm --shm-size="2g" selenium/standalone-chrome-debug:3.141.59
# takes a while to spin up
sleep 5
export WEBDRIVER_URL=http://localhost:4444/wd/hub
pytest tests/fetchers/test_content.py
unset WEBDRIVER_URL
docker kill $$-test_selenium
echo "TESTING WEBDRIVER FETCH > PLAYWRIGHT/BROWSERLESS..."
# Not all platforms support playwright (not ARM/rPI), so it's not packaged in requirements.txt
pip3 install playwright~=1.22
docker run -d --name $$-test_browserless -e "DEFAULT_LAUNCH_ARGS=[\"--window-size=1920,1080\"]" --rm -p 3000:3000 --shm-size="2g" browserless/chrome:1.53-chrome-stable
# takes a while to spin up
sleep 5
export PLAYWRIGHT_DRIVER_URL=ws://127.0.0.1:3000
pytest tests/fetchers/test_content.py
unset PLAYWRIGHT_DRIVER_URL
docker kill $$-test_browserless

[Binary image file (6.2 KiB) present only on the before side; contents not shown]
[Binary image file (12 KiB) present only on the before side; contents not shown]


@@ -1,56 +0,0 @@
/**
* debounce
* @param {integer} milliseconds This param indicates the number of milliseconds
* to wait after the last call before calling the original function.
* @param {object} What "this" refers to in the returned function.
* @return {function} This returns a function that when called will wait the
* indicated number of milliseconds after the last call before
* calling the original function.
*/
Function.prototype.debounce = function (milliseconds, context) {
var baseFunction = this,
timer = null,
wait = milliseconds;
return function () {
var self = context || this,
args = arguments;
function complete() {
baseFunction.apply(self, args);
timer = null;
}
if (timer) {
clearTimeout(timer);
}
timer = setTimeout(complete, wait);
};
};
/**
* throttle
* @param {integer} milliseconds This param indicates the number of milliseconds
* to wait between calls before calling the original function.
* @param {object} What "this" refers to in the returned function.
* @return {function} This returns a function that when called will wait the
* indicated number of milliseconds between calls before
* calling the original function.
*/
Function.prototype.throttle = function (milliseconds, context) {
var baseFunction = this,
lastEventTimestamp = null,
limit = milliseconds;
return function () {
var self = context || this,
args = arguments,
now = Date.now();
if (!lastEventTimestamp || now - lastEventTimestamp >= limit) {
lastEventTimestamp = now;
baseFunction.apply(self, args);
}
};
};


@@ -1,230 +0,0 @@
// Horrible proof of concept code :)
// yes - this is really a hack, if you are a front-ender and want to help, please get in touch!
$(document).ready(function() {
var current_selected_i;
var state_clicked=false;
var c;
// greyed out fill context
var xctx;
// redline highlight context
var ctx;
var current_default_xpath;
var x_scale=1;
var y_scale=1;
var selector_image;
var selector_image_rect;
var selector_data;
$('#visualselector-tab').click(function () {
$("img#selector-background").off('load');
state_clicked = false;
current_selected_i = false;
bootstrap_visualselector();
});
$(document).on('keydown', function(event) {
if ($("img#selector-background").is(":visible")) {
if (event.key == "Escape") {
state_clicked=false;
ctx.clearRect(0, 0, c.width, c.height);
}
}
});
// For when the page loads
if(!window.location.hash || window.location.hash != '#visualselector') {
$("img#selector-background").attr('src','');
return;
}
// Handle clearing button/link
$('#clear-selector').on('click', function(event) {
if(!state_clicked) {
alert('Oops, Nothing selected!');
}
state_clicked=false;
ctx.clearRect(0, 0, c.width, c.height);
xctx.clearRect(0, 0, c.width, c.height);
$("#css_filter").val('');
});
bootstrap_visualselector();
function bootstrap_visualselector() {
if ( 1 ) {
// bootstrap it, this will trigger everything else
$("img#selector-background").bind('load', function () {
console.log("Loaded background...");
c = document.getElementById("selector-canvas");
// greyed out fill context
xctx = c.getContext("2d");
// redline highlight context
ctx = c.getContext("2d");
current_default_xpath =$("#css_filter").val();
fetch_data();
$('#selector-canvas').off("mousemove mousedown");
// screenshot_url defined in the edit.html template
}).attr("src", screenshot_url);
}
}
function fetch_data() {
// Image is ready
$('.fetching-update-notice').html("Fetching element data..");
$.ajax({
url: watch_visual_selector_data_url,
context: document.body
}).done(function (data) {
$('.fetching-update-notice').html("Rendering..");
selector_data = data;
console.log("Reported browser width from backend: "+data['browser_width']);
state_clicked=false;
set_scale();
reflow_selector();
$('.fetching-update-notice').fadeOut();
});
};
function set_scale() {
// some things to check if the scaling doesnt work
// - that the widths/sizes really are about the actual screen size cat elements.json |grep -o width......|sort|uniq
selector_image = $("img#selector-background")[0];
selector_image_rect = selector_image.getBoundingClientRect();
// make the canvas the same size as the image
$('#selector-canvas').attr('height', selector_image_rect.height);
$('#selector-canvas').attr('width', selector_image_rect.width);
$('#selector-wrapper').attr('width', selector_image_rect.width);
x_scale = selector_image_rect.width / selector_data['browser_width'];
y_scale = selector_image_rect.height / selector_image.naturalHeight;
ctx.strokeStyle = 'rgba(255,0,0, 0.9)';
ctx.fillStyle = 'rgba(255,0,0, 0.1)';
ctx.lineWidth = 3;
console.log("scaling set x: "+x_scale+" by y:"+y_scale);
$("#selector-current-xpath").css('max-width', selector_image_rect.width);
}
function reflow_selector() {
$(window).resize(function() {
set_scale();
highlight_current_selected_i();
});
var selector_currnt_xpath_text=$("#selector-current-xpath span");
set_scale();
console.log(selector_data['size_pos'].length + " selectors found");
// highlight the default one if we can find it in the xPath list
// or the xpath matches the default one
found = false;
if(current_default_xpath.length) {
for (var i = selector_data['size_pos'].length; i!==0; i--) {
var sel = selector_data['size_pos'][i-1];
if(selector_data['size_pos'][i - 1].xpath == current_default_xpath) {
console.log("highlighting "+current_default_xpath);
current_selected_i = i-1;
highlight_current_selected_i();
found = true;
break;
}
}
if(!found) {
alert("Unfortunately your existing CSS/xPath Filter was no longer found!");
}
}
$('#selector-canvas').bind('mousemove', function (e) {
if(state_clicked) {
return;
}
ctx.clearRect(0, 0, c.width, c.height);
current_selected_i=null;
// Add in offset
if ((typeof e.offsetX === "undefined" || typeof e.offsetY === "undefined") || (e.offsetX === 0 && e.offsetY === 0)) {
var targetOffset = $(e.target).offset();
e.offsetX = e.pageX - targetOffset.left;
e.offsetY = e.pageY - targetOffset.top;
}
// Reverse order - the most specific one should be deeper/"laster"
// Basically, find the most 'deepest'
var found=0;
ctx.fillStyle = 'rgba(205,0,0,0.35)';
for (var i = selector_data['size_pos'].length; i!==0; i--) {
// draw all of them? let them choose somehow?
var sel = selector_data['size_pos'][i-1];
// If we are in a bounding-box
if (e.offsetY > sel.top * y_scale && e.offsetY < sel.top * y_scale + sel.height * y_scale
&&
e.offsetX > sel.left * y_scale && e.offsetX < sel.left * y_scale + sel.width * y_scale
) {
// FOUND ONE
set_current_selected_text(sel.xpath);
ctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
ctx.fillRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
// no need to keep digging
// @todo or, O to go out/up, I to go in
// or double click to go up/out the selector?
current_selected_i=i-1;
found+=1;
break;
}
}
}.debounce(5));
function set_current_selected_text(s) {
selector_currnt_xpath_text[0].innerHTML=s;
}
function highlight_current_selected_i() {
if(state_clicked) {
state_clicked=false;
xctx.clearRect(0,0,c.width, c.height);
return;
}
var sel = selector_data['size_pos'][current_selected_i];
if (sel[0] == '/') {
// @todo - not sure just checking / is right
$("#css_filter").val('xpath:'+sel.xpath);
} else {
$("#css_filter").val(sel.xpath);
}
xctx.fillStyle = 'rgba(205,205,205,0.95)';
xctx.strokeStyle = 'rgba(225,0,0,0.9)';
xctx.lineWidth = 3;
xctx.fillRect(0,0,c.width, c.height);
// Clear out what only should be seen (make a clear/clean spot)
xctx.clearRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
xctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
state_clicked=true;
set_current_selected_text(sel.xpath);
}
$('#selector-canvas').bind('mousedown', function (e) {
highlight_current_selected_i();
});
}
});
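
(Reference sketch, not part of the diff.) The mousemove handler above is essentially a scaled bounding-box hit-test walked from the most specific element backwards; a hedged Python sketch of the same idea, using x_scale for the horizontal bounds and y_scale for the vertical ones:

# Hedged sketch only: find the deepest element whose scaled box contains the cursor.
def element_under_cursor(size_pos, offset_x, offset_y, x_scale, y_scale):
    for sel in reversed(size_pos):  # later entries are the more specific ones, as in the JS
        if (sel["top"] * y_scale < offset_y < (sel["top"] + sel["height"]) * y_scale
                and sel["left"] * x_scale < offset_x < (sel["left"] + sel["width"]) * x_scale):
            return sel
    return None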

View File

@@ -4,7 +4,6 @@ $(function () {
$(this).closest('.unviewed').removeClass('unviewed'); $(this).closest('.unviewed').removeClass('unviewed');
}); });
$('.with-share-link > *').click(function () { $('.with-share-link > *').click(function () {
$("#copied-clipboard").remove(); $("#copied-clipboard").remove();
@@ -21,6 +20,5 @@ $(function () {
$(this).remove(); $(this).remove();
}); });
}); });
}); });

View File

@@ -338,8 +338,7 @@ footer {
padding-top: 110px; } padding-top: 110px; }
div.tabs.collapsable ul li { div.tabs.collapsable ul li {
display: block; display: block;
border-radius: 0px; border-radius: 0px; }
margin-right: 0px; }
input[type='text'] { input[type='text'] {
width: 100%; } width: 100%; }
/* /*
@@ -430,15 +429,6 @@ and also iPads specifically.
.tab-pane-inner:target { .tab-pane-inner:target {
display: block; } display: block; }
#beta-logo {
height: 50px;
right: -3px;
top: -3px;
position: absolute; }
#selector-header {
padding-bottom: 1em; }
.edit-form { .edit-form {
min-width: 70%; min-width: 70%;
/* so it cant overflow */ /* so it cant overflow */
@@ -464,24 +454,6 @@ ul {
.time-check-widget tr input[type="number"] { .time-check-widget tr input[type="number"] {
width: 5em; } width: 5em; }
#selector-wrapper {
height: 600px;
overflow-y: scroll;
position: relative; }
#selector-wrapper > img {
position: absolute;
z-index: 4;
max-width: 100%; }
#selector-wrapper > canvas {
position: relative;
z-index: 5;
max-width: 100%; }
#selector-wrapper > canvas:hover {
cursor: pointer; }
#selector-current-xpath {
font-size: 80%; }
#webdriver-override-options input[type="number"] { #webdriver-override-options input[type="number"] {
width: 5em; } width: 5em; }

View File

@@ -469,7 +469,6 @@ footer {
div.tabs.collapsable ul li { div.tabs.collapsable ul li {
display: block; display: block;
border-radius: 0px; border-radius: 0px;
margin-right: 0px;
} }
input[type='text'] { input[type='text'] {
@@ -614,18 +613,6 @@ $form-edge-padding: 20px;
padding: 0px; padding: 0px;
} }
#beta-logo {
height: 50px;
// looks better when it's hanging off a little
right: -3px;
top: -3px;
position: absolute;
}
#selector-header {
padding-bottom: 1em;
}
.edit-form { .edit-form {
min-width: 70%; min-width: 70%;
/* so it cant overflow */ /* so it cant overflow */
@@ -662,30 +649,6 @@ ul {
} }
} }
#selector-wrapper {
height: 600px;
overflow-y: scroll;
position: relative;
//width: 100%;
> img {
position: absolute;
z-index: 4;
max-width: 100%;
}
>canvas {
position: relative;
z-index: 5;
max-width: 100%;
&:hover {
cursor: pointer;
}
}
}
#selector-current-xpath {
font-size: 80%;
}
#webdriver-override-options { #webdriver-override-options {
input[type="number"] { input[type="number"] {
width: 5em; width: 5em;

View File

@@ -40,7 +40,7 @@ class ChangeDetectionStore:
# Base definition for all watchers # Base definition for all watchers
# deepcopy part of #569 - not sure why its needed exactly # deepcopy part of #569 - not sure why its needed exactly
self.generic_definition = deepcopy(Watch.model(datastore_path = datastore_path, default={})) self.generic_definition = deepcopy(Watch.model())
if path.isfile('changedetectionio/source.txt'): if path.isfile('changedetectionio/source.txt'):
with open('changedetectionio/source.txt') as f: with open('changedetectionio/source.txt') as f:
@@ -71,10 +71,13 @@ class ChangeDetectionStore:
if 'application' in from_disk['settings']: if 'application' in from_disk['settings']:
self.__data['settings']['application'].update(from_disk['settings']['application']) self.__data['settings']['application'].update(from_disk['settings']['application'])
# Convert each existing watch back to the Watch.model object # Reinitialise each `watching` with our generic_definition in the case that we add a new var in the future.
# @todo pretty sure theres a python we todo this with an abstracted(?) object!
for uuid, watch in self.__data['watching'].items(): for uuid, watch in self.__data['watching'].items():
watch['uuid']=uuid _blank = deepcopy(self.generic_definition)
self.__data['watching'][uuid] = Watch.model(datastore_path=self.datastore_path, default=watch) _blank.update(watch)
self.__data['watching'].update({uuid: _blank})
self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
print("Watching:", uuid, self.__data['watching'][uuid]['url']) print("Watching:", uuid, self.__data['watching'][uuid]['url'])
# First time ran, doesnt exist. # First time ran, doesnt exist.
@@ -84,7 +87,8 @@ class ChangeDetectionStore:
self.add_watch(url='http://www.quotationspage.com/random.php', tag='test') self.add_watch(url='http://www.quotationspage.com/random.php', tag='test')
self.add_watch(url='https://news.ycombinator.com/', tag='Tech news') self.add_watch(url='https://news.ycombinator.com/', tag='Tech news')
self.add_watch(url='https://changedetection.io/CHANGELOG.txt', tag='changedetection.io') self.add_watch(url='https://www.gov.uk/coronavirus', tag='Covid')
self.add_watch(url='https://changedetection.io/CHANGELOG.txt')
self.__data['version_tag'] = version_tag self.__data['version_tag'] = version_tag
@@ -127,8 +131,23 @@ class ChangeDetectionStore:
# Finally start the thread that will manage periodic data saves to JSON # Finally start the thread that will manage periodic data saves to JSON
save_data_thread = threading.Thread(target=self.save_datastore).start() save_data_thread = threading.Thread(target=self.save_datastore).start()
# Returns the newest key, but if theres only 1 record, then it's counted as not being new, so return 0.
def get_newest_history_key(self, uuid):
if len(self.__data['watching'][uuid]['history']) == 1:
return 0
dates = list(self.__data['watching'][uuid]['history'].keys())
# Convert to int, sort and back to str again
# @todo replace datastore getter that does this automatically
dates = [int(i) for i in dates]
dates.sort(reverse=True)
if len(dates):
# always keyed as str
return str(dates[0])
return 0
def set_last_viewed(self, uuid, timestamp): def set_last_viewed(self, uuid, timestamp):
logging.debug("Setting watch UUID: {} last viewed to {}".format(uuid, int(timestamp)))
self.data['watching'][uuid].update({'last_viewed': int(timestamp)}) self.data['watching'][uuid].update({'last_viewed': int(timestamp)})
self.needs_write = True self.needs_write = True
@@ -152,6 +171,7 @@ class ChangeDetectionStore:
del (update_obj[dict_key]) del (update_obj[dict_key])
self.__data['watching'][uuid].update(update_obj) self.__data['watching'][uuid].update(update_obj)
self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
self.needs_write = True self.needs_write = True
@@ -166,20 +186,20 @@ class ChangeDetectionStore:
seconds += x * n seconds += x * n
return max(seconds, minimum_seconds_recheck_time) return max(seconds, minimum_seconds_recheck_time)
@property
def has_unviewed(self):
for uuid, watch in self.__data['watching'].items():
if watch.viewed == False:
return True
return False
@property @property
def data(self): def data(self):
has_unviewed = False has_unviewed = False
for uuid, watch in self.__data['watching'].items(): for uuid, v in self.__data['watching'].items():
self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
if int(v['newest_history_key']) <= int(v['last_viewed']):
self.__data['watching'][uuid]['viewed'] = True
else:
self.__data['watching'][uuid]['viewed'] = False
has_unviewed = True
# #106 - Be sure this is None on empty string, False, None, etc # #106 - Be sure this is None on empty string, False, None, etc
# Default var for fetch_backend # Default var for fetch_backend
# @todo this may not be needed anymore, or could be easily removed
if not self.__data['watching'][uuid]['fetch_backend']: if not self.__data['watching'][uuid]['fetch_backend']:
self.__data['watching'][uuid]['fetch_backend'] = self.__data['settings']['application']['fetch_backend'] self.__data['watching'][uuid]['fetch_backend'] = self.__data['settings']['application']['fetch_backend']
@@ -188,6 +208,8 @@ class ChangeDetectionStore:
if not self.__data['settings']['application']['base_url']: if not self.__data['settings']['application']['base_url']:
self.__data['settings']['application']['base_url'] = env_base_url.strip('" ') self.__data['settings']['application']['base_url'] = env_base_url.strip('" ')
self.__data['has_unviewed'] = has_unviewed
return self.__data return self.__data
def get_all_tags(self): def get_all_tags(self):
@@ -218,11 +240,11 @@ class ChangeDetectionStore:
# GitHub #30 also delete history records # GitHub #30 also delete history records
for uuid in self.data['watching']: for uuid in self.data['watching']:
for path in self.data['watching'][uuid].history.values(): for path in self.data['watching'][uuid]['history'].values():
self.unlink_history_file(path) self.unlink_history_file(path)
else: else:
for path in self.data['watching'][uuid].history.values(): for path in self.data['watching'][uuid]['history'].values():
self.unlink_history_file(path) self.unlink_history_file(path)
del self.data['watching'][uuid] del self.data['watching'][uuid]
@@ -254,14 +276,13 @@ class ChangeDetectionStore:
def scrub_watch(self, uuid): def scrub_watch(self, uuid):
import pathlib import pathlib
self.__data['watching'][uuid].update({'history': {}, 'last_checked': 0, 'last_changed': 0, 'previous_md5': False}) self.__data['watching'][uuid].update({'history': {}, 'last_checked': 0, 'last_changed': 0, 'newest_history_key': 0, 'previous_md5': False})
self.needs_write_urgent = True self.needs_write_urgent = True
for item in pathlib.Path(self.datastore_path).rglob(uuid+"/*.txt"): for item in pathlib.Path(self.datastore_path).rglob(uuid+"/*.txt"):
unlink(item) unlink(item)
def add_watch(self, url, tag="", extras=None, write_to_disk_now=True): def add_watch(self, url, tag="", extras=None, write_to_disk_now=True):
if extras is None: if extras is None:
extras = {} extras = {}
# should always be str # should always be str
@@ -287,7 +308,7 @@ class ChangeDetectionStore:
'body', 'method', 'body', 'method',
'ignore_text', 'css_filter', 'ignore_text', 'css_filter',
'subtractive_selectors', 'trigger_text', 'subtractive_selectors', 'trigger_text',
'extract_title_as_title', 'extract_text']: 'extract_title_as_title']:
if res.get(k): if res.get(k):
apply_extras[k] = res[k] apply_extras[k] = res[k]
@@ -297,15 +318,16 @@ class ChangeDetectionStore:
return False return False
with self.lock: with self.lock:
# @todo use a common generic version of this
new_uuid = str(uuid_builder.uuid4())
# #Re 569 # #Re 569
new_watch = Watch.model(datastore_path=self.datastore_path, default={ # Not sure why deepcopy was needed here, sometimes new watches would appear to already have 'history' set
# I assumed this would instantiate a new object but somehow an existing dict was getting used
new_watch = deepcopy(Watch.model({
'url': url, 'url': url,
'tag': tag 'tag': tag
}) }))
new_uuid = new_watch['uuid']
logging.debug("Added URL {} - {}".format(url, new_uuid))
for k in ['uuid', 'history', 'last_checked', 'last_changed', 'newest_history_key', 'previous_md5', 'viewed']: for k in ['uuid', 'history', 'last_checked', 'last_changed', 'newest_history_key', 'previous_md5', 'viewed']:
if k in apply_extras: if k in apply_extras:
@@ -325,6 +347,23 @@ class ChangeDetectionStore:
self.sync_to_json() self.sync_to_json()
return new_uuid return new_uuid
# Save some text file to the appropriate path and bump the history
# result_obj from fetch_site_status.run()
def save_history_text(self, watch_uuid, contents):
import uuid
output_path = "{}/{}".format(self.datastore_path, watch_uuid)
# Incase the operator deleted it, check and create.
if not os.path.isdir(output_path):
mkdir(output_path)
fname = "{}/{}.stripped.txt".format(output_path, uuid.uuid4())
with open(fname, 'wb') as f:
f.write(contents)
f.close()
return fname
def get_screenshot(self, watch_uuid): def get_screenshot(self, watch_uuid):
output_path = "{}/{}".format(self.datastore_path, watch_uuid) output_path = "{}/{}".format(self.datastore_path, watch_uuid)
fname = "{}/last-screenshot.png".format(output_path) fname = "{}/last-screenshot.png".format(output_path)
@@ -333,15 +372,6 @@ class ChangeDetectionStore:
return False return False
def visualselector_data_is_ready(self, watch_uuid):
output_path = "{}/{}".format(self.datastore_path, watch_uuid)
screenshot_filename = "{}/last-screenshot.png".format(output_path)
elements_index_filename = "{}/elements.json".format(output_path)
if path.isfile(screenshot_filename) and path.isfile(elements_index_filename) :
return True
return False
# Save as PNG, PNG is larger but better for doing visual diff in the future # Save as PNG, PNG is larger but better for doing visual diff in the future
def save_screenshot(self, watch_uuid, screenshot: bytes): def save_screenshot(self, watch_uuid, screenshot: bytes):
output_path = "{}/{}".format(self.datastore_path, watch_uuid) output_path = "{}/{}".format(self.datastore_path, watch_uuid)
@@ -350,14 +380,6 @@ class ChangeDetectionStore:
f.write(screenshot) f.write(screenshot)
f.close() f.close()
def save_xpath_data(self, watch_uuid, data):
output_path = "{}/{}".format(self.datastore_path, watch_uuid)
fname = "{}/elements.json".format(output_path)
with open(fname, 'w') as f:
f.write(json.dumps(data))
f.close()
def sync_to_json(self): def sync_to_json(self):
logging.info("Saving JSON..") logging.info("Saving JSON..")
print("Saving JSON..") print("Saving JSON..")
@@ -410,8 +432,8 @@ class ChangeDetectionStore:
index=[] index=[]
for uuid in self.data['watching']: for uuid in self.data['watching']:
for id in self.data['watching'][uuid].history: for id in self.data['watching'][uuid]['history']:
index.append(self.data['watching'][uuid].history[str(id)]) index.append(self.data['watching'][uuid]['history'][str(id)])
import pathlib import pathlib
@@ -482,28 +504,3 @@ class ChangeDetectionStore:
# Only upgrade individual watch time if it was set # Only upgrade individual watch time if it was set
if watch.get('minutes_between_check', False): if watch.get('minutes_between_check', False):
self.data['watching'][uuid]['time_between_check']['minutes'] = watch['minutes_between_check'] self.data['watching'][uuid]['time_between_check']['minutes'] = watch['minutes_between_check']
# Move the history list to a flat text file index
# Better than SQLite because this list is only appended to, and works across NAS / NFS type setups
def update_2(self):
# @todo test running this on a newly updated one (when this already ran)
for uuid, watch in self.data['watching'].items():
history = []
if watch.get('history', False):
for d, p in watch['history'].items():
d = int(d) # Used to be keyed as str, we'll fix this now too
history.append("{},{}\n".format(d,p))
if len(history):
target_path = os.path.join(self.datastore_path, uuid)
if os.path.exists(target_path):
with open(os.path.join(target_path, "history.txt"), "w") as f:
f.writelines(history)
else:
logging.warning("Datastore history directory {} does not exist, skipping history import.".format(target_path))
# No longer needed, dynamically pulled from the disk when needed.
# But we should set it back to a empty dict so we don't break if this schema runs on an earlier version.
# In the distant future we can remove this entirely
self.data['watching'][uuid]['history'] = {}
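
(Reference sketch, not part of the diff.) The update_2() migration above defines the flat per-watch index this change rolls back from: one "timestamp,snapshot-path" line appended to <datastore>/<uuid>/history.txt. A hedged sketch of reading that index back, in the same spirit as the parsing used by the removed test_history_consistency test further down:

import os

# Hedged sketch only: parse a per-watch history.txt into {timestamp: snapshot_path}.
def read_history_index(datastore_path, watch_uuid):
    index_file = os.path.join(datastore_path, watch_uuid, "history.txt")
    if not os.path.isfile(index_file):
        return {}
    with open(index_file) as f:
        return dict(line.strip().split(",", 1) for line in f if line.strip())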

View File

@@ -39,6 +39,9 @@
<div class="tabs"> <div class="tabs">
<ul> <ul>
<li class="tab" id="default-tab"><a href="#text">Text</a></li> <li class="tab" id="default-tab"><a href="#text">Text</a></li>
{% if screenshot %}
<li class="tab"><a href="#screenshot">Current screenshot</a></li>
{% endif %}
</ul> </ul>
</div> </div>
@@ -60,6 +63,17 @@
</table> </table>
Diff algorithm from the amazing <a href="https://github.com/kpdecker/jsdiff">github.com/kpdecker/jsdiff</a> Diff algorithm from the amazing <a href="https://github.com/kpdecker/jsdiff">github.com/kpdecker/jsdiff</a>
</div> </div>
{% if screenshot %}
<div class="tab-pane-inner" id="screenshot">
<p>
<i>For now, only the most recent screenshot is saved and displayed.</i>
</p>
<img src="{{url_for('static_content', group='screenshot', filename=uuid)}}">
</div>
{% endif %}
</div> </div>

View File

@@ -5,18 +5,12 @@
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script> <script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<script> <script>
const notification_base_url="{{url_for('ajax_callback_send_notification_test')}}"; const notification_base_url="{{url_for('ajax_callback_send_notification_test')}}";
const watch_visual_selector_data_url="{{url_for('static_content', group='visual_selector_data', filename=uuid)}}";
const screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid)}}";
{% if emailprefix %} {% if emailprefix %}
const email_notification_prefix=JSON.parse('{{ emailprefix|tojson }}'); const email_notification_prefix=JSON.parse('{{ emailprefix|tojson }}');
{% endif %} {% endif %}
</script> </script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-settings.js')}}" defer></script> <script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-settings.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script> <script type="text/javascript" src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='visual-selector.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='limit.js')}}" defer></script>
<div class="edit-form monospaced-textarea"> <div class="edit-form monospaced-textarea">
@@ -24,7 +18,6 @@
<ul> <ul>
<li class="tab" id="default-tab"><a href="#general">General</a></li> <li class="tab" id="default-tab"><a href="#general">General</a></li>
<li class="tab"><a href="#request">Request</a></li> <li class="tab"><a href="#request">Request</a></li>
<li class="tab"><a id="visualselector-tab" href="#visualselector">Visual Selector</a></li>
<li class="tab"><a href="#filters-and-triggers">Filters &amp; Triggers</a></li> <li class="tab"><a href="#filters-and-triggers">Filters &amp; Triggers</a></li>
<li class="tab"><a href="#notifications">Notifications</a></li> <li class="tab"><a href="#notifications">Notifications</a></li>
</ul> </ul>
@@ -199,57 +192,6 @@ nav
</span> </span>
</div> </div>
</fieldset> </fieldset>
<fieldset>
<div class="pure-control-group">
{{ render_field(form.extract_text, rows=5, placeholder="\d+ online") }}
<span class="pure-form-message-inline">
<ul>
<li>Extracts text in the final output after other filters using regular expressions, for example <code>\d+ online</code></li>
<li>One line per regular-expression.</li>
</ul>
</span>
</div>
</fieldset>
</div>
<div class="tab-pane-inner visual-selector-ui" id="visualselector">
<img id="beta-logo" src="{{url_for('static_content', group='images', filename='beta-logo.png')}}">
<fieldset>
<div class="pure-control-group">
{% if visualselector_enabled %}
{% if visualselector_data_is_ready %}
<div id="selector-header">
<a id="clear-selector" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Clear selection</a>
<i class="fetching-update-notice" style="font-size: 80%;">One moment, fetching screenshot and element information..</i>
</div>
<div id="selector-wrapper">
<!-- request the screenshot and get the element offset info ready -->
<!-- use img src ready load to know everything is ready to map out -->
<!-- @todo: maybe something interesting like a field to select 'elements that contain text... and their parents n' -->
<img id="selector-background" />
<canvas id="selector-canvas"></canvas>
</div>
<div id="selector-current-xpath" style="overflow-x: hidden"><strong>Currently:</strong>&nbsp;<span class="text">Loading...</span></div>
<span class="pure-form-message-inline">
<p><span style="font-weight: bold">Beta!</span> The Visual Selector is new and there may be minor bugs, please report pages that dont work, help us to improve this software!</p>
</span>
{% else %}
<span class="pure-form-message-inline">Screenshot and element data is not available or not yet ready.</span>
{% endif %}
{% else %}
<span class="pure-form-message-inline">
<p>Sorry, this functionality only works with Playwright/Chrome enabled watches.</p>
<p>Enable the Playwright Chrome fetcher, or alternatively try our <a href="https://lemonade.changedetection.io/start">very affordable subscription based service</a>.</p>
<p>This is because Selenium/WebDriver can not extract full page screenshots reliably.</p>
</span>
{% endif %}
</div>
</fieldset>
</div> </div>
<div id="actions"> <div id="actions">

View File

@@ -10,6 +10,9 @@
<div class="tabs"> <div class="tabs">
<ul> <ul>
<li class="tab" id="default-tab"><a href="#text">Text</a></li> <li class="tab" id="default-tab"><a href="#text">Text</a></li>
{% if screenshot %}
<li class="tab"><a href="#screenshot">Current screenshot</a></li>
{% endif %}
</ul> </ul>
</div> </div>
@@ -28,5 +31,15 @@
</tbody> </tbody>
</table> </table>
</div> </div>
{% if screenshot %}
<div class="tab-pane-inner" id="screenshot">
<p>
<i>For now, only the most recent screenshot is saved and displayed.</i>
</p>
<img src="{{url_for('static_content', group='screenshot', filename=uuid)}}">
</div>
{% endif %}
</div> </div>
{% endblock %} {% endblock %}

View File

@@ -3,7 +3,6 @@
{% from '_helpers.jinja' import render_simple_field %} {% from '_helpers.jinja' import render_simple_field %}
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script> <script type="text/javascript" src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-overview.js')}}" defer></script> <script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-overview.js')}}" defer></script>
<div class="box"> <div class="box">
<form class="pure-form" action="{{ url_for('form_watch_add') }}" method="POST" id="new-watch-form"> <form class="pure-form" action="{{ url_for('form_watch_add') }}" method="POST" id="new-watch-form">
@@ -46,7 +45,7 @@
{% if watch.last_error is defined and watch.last_error != False %}error{% endif %} {% if watch.last_error is defined and watch.last_error != False %}error{% endif %}
{% if watch.last_notification_error is defined and watch.last_notification_error != False %}error{% endif %} {% if watch.last_notification_error is defined and watch.last_notification_error != False %}error{% endif %}
{% if watch.paused is defined and watch.paused != False %}paused{% endif %} {% if watch.paused is defined and watch.paused != False %}paused{% endif %}
{% if watch.newest_history_key| int > watch.last_viewed and watch.history_n>=2 %}unviewed{% endif %} {% if watch.newest_history_key| int > watch.last_viewed| int %}unviewed{% endif %}
{% if watch.uuid in queued_uuids %}queued{% endif %}"> {% if watch.uuid in queued_uuids %}queued{% endif %}">
<td class="inline">{{ loop.index }}</td> <td class="inline">{{ loop.index }}</td>
<td class="inline paused-state state-{{watch.paused}}"><a href="{{url_for('index', pause=watch.uuid, tag=active_tag)}}"><img src="{{url_for('static_content', group='images', filename='pause.svg')}}" alt="Pause" title="Pause"/></a></td> <td class="inline paused-state state-{{watch.paused}}"><a href="{{url_for('index', pause=watch.uuid, tag=active_tag)}}"><img src="{{url_for('static_content', group='images', filename='pause.svg')}}" alt="Pause" title="Pause"/></a></td>
@@ -68,7 +67,7 @@
{% endif %} {% endif %}
</td> </td>
<td class="last-checked">{{watch|format_last_checked_time}}</td> <td class="last-checked">{{watch|format_last_checked_time}}</td>
<td class="last-changed">{% if watch.history_n >=2 and watch.last_changed %} <td class="last-changed">{% if watch.history|length >= 2 and watch.last_changed %}
{{watch.last_changed|format_timestamp_timeago}} {{watch.last_changed|format_timestamp_timeago}}
{% else %} {% else %}
Not yet Not yet
@@ -78,10 +77,10 @@
<a {% if watch.uuid in queued_uuids %}disabled="true"{% endif %} href="{{ url_for('form_watch_checknow', uuid=watch.uuid, tag=request.args.get('tag')) }}" <a {% if watch.uuid in queued_uuids %}disabled="true"{% endif %} href="{{ url_for('form_watch_checknow', uuid=watch.uuid, tag=request.args.get('tag')) }}"
class="recheck pure-button button-small pure-button-primary">{% if watch.uuid in queued_uuids %}Queued{% else %}Recheck{% endif %}</a> class="recheck pure-button button-small pure-button-primary">{% if watch.uuid in queued_uuids %}Queued{% else %}Recheck{% endif %}</a>
<a href="{{ url_for('edit_page', uuid=watch.uuid)}}" class="pure-button button-small pure-button-primary">Edit</a> <a href="{{ url_for('edit_page', uuid=watch.uuid)}}" class="pure-button button-small pure-button-primary">Edit</a>
{% if watch.history_n >= 2 %} {% if watch.history|length >= 2 %}
<a href="{{ url_for('diff_history_page', uuid=watch.uuid) }}" target="{{watch.uuid}}" class="pure-button button-small pure-button-primary diff-link">Diff</a> <a href="{{ url_for('diff_history_page', uuid=watch.uuid) }}" target="{{watch.uuid}}" class="pure-button button-small pure-button-primary diff-link">Diff</a>
{% else %} {% else %}
{% if watch.history_n == 1 %} {% if watch.history|length == 1 %}
<a href="{{ url_for('preview_page', uuid=watch.uuid)}}" target="{{watch.uuid}}" class="pure-button button-small pure-button-primary">Preview</a> <a href="{{ url_for('preview_page', uuid=watch.uuid)}}" target="{{watch.uuid}}" class="pure-button button-small pure-button-primary">Preview</a>
{% endif %} {% endif %}
{% endif %} {% endif %}

View File

@@ -1,2 +0,0 @@
"""Tests for the app."""

View File

@@ -1,3 +0,0 @@
#!/usr/bin/python3
from .. import conftest

View File

@@ -1,48 +0,0 @@
#!/usr/bin/python3
import time
from flask import url_for
from ..util import live_server_setup
import logging
def test_fetch_webdriver_content(client, live_server):
live_server_setup(live_server)
#####################
res = client.post(
url_for("settings_page"),
data={"application-empty_pages_are_a_change": "",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_webdriver"},
follow_redirects=True
)
assert b"Settings updated." in res.data
# Add our URL to the import page
res = client.post(
url_for("import_page"),
data={"urls": "https://changedetection.io/ci-test.html"},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(3)
attempt = 0
while attempt < 20:
res = client.get(url_for("index"))
if not b'Checking now' in res.data:
break
logging.getLogger().info("Waiting for check to not say 'Checking now'..")
time.sleep(3)
attempt += 1
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
logging.getLogger().info("Looking for correct fetched HTML (text) from server")
assert b'cool it works' in res.data

View File

@@ -2,7 +2,7 @@
import time import time
from flask import url_for from flask import url_for
from .util import live_server_setup, extract_api_key_from_UI from .util import live_server_setup
import json import json
import uuid import uuid
@@ -53,10 +53,23 @@ def is_valid_uuid(val):
return False return False
# kinda funky, but works for now
def _extract_api_key_from_UI(client):
import re
res = client.get(
url_for("settings_page"),
)
# <span id="api-key">{{api_key}}</span>
m = re.search('<span id="api-key">(.+?)</span>', str(res.data))
api_key = m.group(1)
return api_key.strip()
def test_api_simple(client, live_server): def test_api_simple(client, live_server):
live_server_setup(live_server) live_server_setup(live_server)
api_key = extract_api_key_from_UI(client) api_key = _extract_api_key_from_UI(client)
# Create a watch # Create a watch
set_original_response() set_original_response()

View File

@@ -3,15 +3,14 @@
import time import time
from flask import url_for from flask import url_for
from urllib.request import urlopen from urllib.request import urlopen
from .util import set_original_response, set_modified_response, live_server_setup from . util import set_original_response, set_modified_response, live_server_setup
sleep_time_for_fetch_thread = 3 sleep_time_for_fetch_thread = 3
# Basic test to check inscriptus is not adding return line chars, basically works etc # Basic test to check inscriptus is not adding return line chars, basically works etc
def test_inscriptus(): def test_inscriptus():
from inscriptis import get_text from inscriptis import get_text
html_content = "<html><body>test!<br/>ok man</body></html>" html_content="<html><body>test!<br/>ok man</body></html>"
stripped_text_from_html = get_text(html_content) stripped_text_from_html = get_text(html_content)
assert stripped_text_from_html == 'test!\nok man' assert stripped_text_from_html == 'test!\nok man'
@@ -83,7 +82,7 @@ def test_check_basic_change_detection_functionality(client, live_server):
# re #16 should have the diff in here too # re #16 should have the diff in here too
assert b'(into ) which has this one new line' in res.data assert b'(into ) which has this one new line' in res.data
assert b'CDATA' in res.data assert b'CDATA' in res.data
assert expected_url.encode('utf-8') in res.data assert expected_url.encode('utf-8') in res.data
# Following the 'diff' link, it should no longer display as 'unviewed' even after we recheck it a few times # Following the 'diff' link, it should no longer display as 'unviewed' even after we recheck it a few times
@@ -102,8 +101,7 @@ def test_check_basic_change_detection_functionality(client, live_server):
# It should report nothing found (no new 'unviewed' class) # It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index")) res = client.get(url_for("index"))
assert b'unviewed' not in res.data assert b'unviewed' not in res.data
assert b'Mark all viewed' not in res.data assert b'head title' not in res.data # Should not be present because this is off by default
assert b'head title' not in res.data # Should not be present because this is off by default
assert b'test-endpoint' in res.data assert b'test-endpoint' in res.data
set_original_response() set_original_response()
@@ -111,8 +109,7 @@ def test_check_basic_change_detection_functionality(client, live_server):
# Enable auto pickup of <title> in settings # Enable auto pickup of <title> in settings
res = client.post( res = client.post(
url_for("settings_page"), url_for("settings_page"),
data={"application-extract_title_as_title": "1", "requests-time_between_check-minutes": 180, data={"application-extract_title_as_title": "1", "requests-time_between_check-minutes": 180, 'application-fetch_backend': "html_requests"},
'application-fetch_backend': "html_requests"},
follow_redirects=True follow_redirects=True
) )
@@ -121,18 +118,11 @@ def test_check_basic_change_detection_functionality(client, live_server):
res = client.get(url_for("index")) res = client.get(url_for("index"))
assert b'unviewed' in res.data assert b'unviewed' in res.data
assert b'Mark all viewed' in res.data
# It should have picked up the <title> # It should have picked up the <title>
assert b'head title' in res.data assert b'head title' in res.data
# hit the mark all viewed link
res = client.get(url_for("mark_all_viewed"), follow_redirects=True)
assert b'Mark all viewed' not in res.data
assert b'unviewed' not in res.data
# #
# Cleanup everything # Cleanup everything
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True) res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data assert b'Deleted' in res.data
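
(Reference sketch, not part of the diff.) The removed assertions exercise a "Mark all viewed" flow; the following is only a guess at what that route does, built from the datastore's set_last_viewed() shown earlier - the real handler may differ:

import time

# Hedged guess only: mark every watch as viewed "now" via the datastore API above.
def mark_all_viewed(datastore):
    now = int(time.time())
    for uuid in datastore.data["watching"]:
        datastore.set_last_viewed(uuid, now)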

View File

@@ -150,8 +150,9 @@ def test_element_removal_full(client, live_server):
# Give the thread time to pick it up # Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread) time.sleep(sleep_time_for_fetch_thread)
# so that we set the state to 'unviewed' after all the edits # No change yet - first check
client.get(url_for("diff_history_page", uuid="first")) res = client.get(url_for("index"))
assert b"unviewed" not in res.data
# Make a change to header/footer/nav # Make a change to header/footer/nav
set_modified_response() set_modified_response()

View File

@@ -1,127 +0,0 @@
#!/usr/bin/python3
import time
from flask import url_for
from .util import live_server_setup
from ..html_tools import *
def set_original_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
<div id="sametext">Some text thats the same</div>
<div id="changetext">Some text that will change</div>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def set_modified_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>which has this one new line</p>
</br>
So let's see what happens. </br>
<div id="sametext">Some text thats the same</div>
<div id="changetext">Some text that did change ( 1000 online <br/> 80 guests)</div>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def test_check_filter_and_regex_extract(client, live_server):
sleep_time_for_fetch_thread = 3
live_server_setup(live_server)
css_filter = "#changetext"
set_original_response()
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Goto the edit page, add our ignore text
# Add our URL to the import page
res = client.post(
url_for("edit_page", uuid="first"),
data={"css_filter": css_filter,
'extract_text': '\d+ online\n\d+ guests',
"url": test_url,
"tag": "",
"headers": "",
'fetch_backend': "html_requests"
},
follow_redirects=True
)
assert b"Updated watch." in res.data
# Check it saved
res = client.get(
url_for("edit_page", uuid="first"),
)
assert b'\d+ online' in res.data
# Trigger a check
# client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Make a change
set_modified_response()
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should have 'unviewed' still
# Because it should be looking at only that 'sametext' id
res = client.get(url_for("index"))
assert b'unviewed' in res.data
# Check HTML conversion detected and worked
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
# Class will be blank for now because the frontend didnt apply the diff
assert b'<div class="">1000 online' in res.data
# Both regexs should be here
assert b'<div class="">80 guests' in res.data
# Should not be here
assert b'Some text that did change' not in res.data
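
(Reference sketch, not part of the diff.) The deleted test exercises the extract_text feature: each line of the setting is its own regular expression applied to the filtered text, and only the matches make it into the final output. A hedged sketch of that behaviour, not the project's actual html_tools code:

import re

# Hedged sketch only: keep just the regex matches, one pattern per line of the setting.
def extract_text(stripped_text, extract_text_setting):
    matches = []
    for pattern in extract_text_setting.splitlines():
        if pattern.strip():
            matches += re.findall(pattern, stripped_text)
    return "\n".join(matches)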

View File

@@ -1,84 +0,0 @@
#!/usr/bin/python3
import time
import os
import json
import logging
from flask import url_for
from .util import live_server_setup
from urllib.parse import urlparse, parse_qs
def test_consistent_history(client, live_server):
live_server_setup(live_server)
# Give the endpoint time to spin up
time.sleep(1)
r = range(1, 50)
for one in r:
test_url = url_for('test_endpoint', content_type="text/html", content=str(one), _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(3)
while True:
res = client.get(url_for("index"))
logging.debug("Waiting for 'Checking now' to go away..")
if b'Checking now' not in res.data:
break
time.sleep(0.5)
time.sleep(3)
# Essentially just triggers the DB write/update
res = client.post(
url_for("settings_page"),
data={"application-empty_pages_are_a_change": "",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Settings updated." in res.data
# Give it time to write it out
time.sleep(3)
json_db_file = os.path.join(live_server.app.config['DATASTORE'].datastore_path, 'url-watches.json')
json_obj = None
with open(json_db_file, 'r') as f:
json_obj = json.load(f)
# assert the right amount of watches was found in the JSON
assert len(json_obj['watching']) == len(r), "Correct number of watches was found in the JSON"
# each one should have a history.txt containing just one line
for w in json_obj['watching'].keys():
history_txt_index_file = os.path.join(live_server.app.config['DATASTORE'].datastore_path, w, 'history.txt')
assert os.path.isfile(history_txt_index_file), "History.txt should exist where I expect it - {}".format(history_txt_index_file)
# Same like in model.Watch
with open(history_txt_index_file, "r") as f:
tmp_history = dict(i.strip().split(',', 2) for i in f.readlines())
assert len(tmp_history) == 1, "History.txt should contain 1 line"
# Should be two files,. the history.txt , and the snapshot.txt
files_in_watch_dir = os.listdir(os.path.join(live_server.app.config['DATASTORE'].datastore_path,
w))
# Find the snapshot one
for fname in files_in_watch_dir:
if fname != 'history.txt':
# contents should match what we requested as content returned from the test url
with open(os.path.join(live_server.app.config['DATASTORE'].datastore_path, w, fname), 'r') as snapshot_f:
contents = snapshot_f.read()
watch_url = json_obj['watching'][w]['url']
u = urlparse(watch_url)
q = parse_qs(u[4])
assert q['content'][0] == contents.strip(), "Snapshot file {} should contain {}".format(fname, q['content'][0])
assert len(files_in_watch_dir) == 2, "Should be just two files in the dir, history.txt and the snapshot"

View File

@@ -43,7 +43,7 @@ def set_modified_with_trigger_text_response():
Some NEW nice initial text</br> Some NEW nice initial text</br>
<p>Which is across multiple lines</p> <p>Which is across multiple lines</p>
</br> </br>
Add to cart foobar123
<br/> <br/>
So let's see what happens. </br> So let's see what happens. </br>
</body> </body>
@@ -60,7 +60,7 @@ def test_trigger_functionality(client, live_server):
live_server_setup(live_server) live_server_setup(live_server)
sleep_time_for_fetch_thread = 3 sleep_time_for_fetch_thread = 3
trigger_text = "Add to cart" trigger_text = "foobar123"
set_original_ignore_response() set_original_ignore_response()
# Give the endpoint time to spin up # Give the endpoint time to spin up
@@ -78,6 +78,9 @@ def test_trigger_functionality(client, live_server):
# Trigger a check # Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True) client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Goto the edit page, add our ignore text # Goto the edit page, add our ignore text
# Add our URL to the import page # Add our URL to the import page
res = client.post( res = client.post(
@@ -95,12 +98,6 @@ def test_trigger_functionality(client, live_server):
) )
assert bytes(trigger_text.encode('utf-8')) in res.data assert bytes(trigger_text.encode('utf-8')) in res.data
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# so that we set the state to 'unviewed' after all the edits
client.get(url_for("diff_history_page", uuid="first"))
# Trigger a check # Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True) client.get(url_for("form_watch_checknow"), follow_redirects=True)
@@ -124,7 +121,7 @@ def test_trigger_functionality(client, live_server):
res = client.get(url_for("index")) res = client.get(url_for("index"))
assert b'unviewed' not in res.data assert b'unviewed' not in res.data
# Now set the content which contains the trigger text # Just to be sure.. set a regular modified change..
time.sleep(sleep_time_for_fetch_thread) time.sleep(sleep_time_for_fetch_thread)
set_modified_with_trigger_text_response() set_modified_with_trigger_text_response()
@@ -132,14 +129,8 @@ def test_trigger_functionality(client, live_server):
time.sleep(sleep_time_for_fetch_thread) time.sleep(sleep_time_for_fetch_thread)
res = client.get(url_for("index")) res = client.get(url_for("index"))
assert b'unviewed' in res.data assert b'unviewed' in res.data
# https://github.com/dgtlmoon/changedetection.io/issues/616
# Apparently the actual snapshot that contains the trigger never shows
res = client.get(url_for("diff_history_page", uuid="first"))
assert b'Add to cart' in res.data
# Check the preview/highlighter, we should be able to see what we triggered on, but it should be highlighted # Check the preview/highlighter, we should be able to see what we triggered on, but it should be highlighted
res = client.get(url_for("preview_page", uuid="first")) res = client.get(url_for("preview_page", uuid="first"))
# We should be able to see what we ignored
# We should be able to see what we triggered on assert b'<div class="triggered">foobar' in res.data
assert b'<div class="triggered">Add to cart' in res.data

View File

@@ -42,6 +42,9 @@ def test_trigger_regex_functionality(client, live_server):
) )
assert b"1 Imported" in res.data assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up # Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread) time.sleep(sleep_time_for_fetch_thread)
@@ -57,9 +60,7 @@ def test_trigger_regex_functionality(client, live_server):
"fetch_backend": "html_requests"}, "fetch_backend": "html_requests"},
follow_redirects=True follow_redirects=True
) )
time.sleep(sleep_time_for_fetch_thread)
# so that we set the state to 'unviewed' after all the edits
client.get(url_for("diff_history_page", uuid="first"))
with open("test-datastore/endpoint-content.txt", "w") as f: with open("test-datastore/endpoint-content.txt", "w") as f:
f.write("some new noise") f.write("some new noise")
@@ -77,8 +78,4 @@ def test_trigger_regex_functionality(client, live_server):
client.get(url_for("form_watch_checknow"), follow_redirects=True) client.get(url_for("form_watch_checknow"), follow_redirects=True)
time.sleep(sleep_time_for_fetch_thread) time.sleep(sleep_time_for_fetch_thread)
res = client.get(url_for("index")) res = client.get(url_for("index"))
assert b'unviewed' in res.data assert b'unviewed' in res.data
# Cleanup everything
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data

View File

@@ -22,9 +22,10 @@ def set_original_ignore_response():
def test_trigger_regex_functionality_with_filter(client, live_server): def test_trigger_regex_functionality(client, live_server):
live_server_setup(live_server) live_server_setup(live_server)
sleep_time_for_fetch_thread = 3 sleep_time_for_fetch_thread = 3
set_original_ignore_response() set_original_ignore_response()
@@ -41,24 +42,26 @@ def test_trigger_regex_functionality_with_filter(client, live_server):
) )
assert b"1 Imported" in res.data assert b"1 Imported" in res.data
# it needs time to save the original version # Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread) time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (just a new one shouldnt have anything)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
### test regex with filter ### test regex with filter
res = client.post( res = client.post(
url_for("edit_page", uuid="first"), url_for("edit_page", uuid="first"),
data={"trigger_text": "/cool.stuff/", data={"trigger_text": "/cool.stuff\d/",
"url": test_url, "url": test_url,
"css_filter": '#in-here', "css_filter": '#in-here',
"fetch_backend": "html_requests"}, "fetch_backend": "html_requests"},
follow_redirects=True follow_redirects=True
) )
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
client.get(url_for("diff_history_page", uuid="first"))
# Check that we have the expected text.. but it's not in the css filter we want # Check that we have the expected text.. but it's not in the css filter we want
with open("test-datastore/endpoint-content.txt", "w") as f: with open("test-datastore/endpoint-content.txt", "w") as f:
f.write("<html>some new noise with cool stuff2 ok</html>") f.write("<html>some new noise with cool stuff2 ok</html>")
@@ -70,7 +73,6 @@ def test_trigger_regex_functionality_with_filter(client, live_server):
res = client.get(url_for("index")) res = client.get(url_for("index"))
assert b'unviewed' not in res.data assert b'unviewed' not in res.data
# now this should trigger something
with open("test-datastore/endpoint-content.txt", "w") as f: with open("test-datastore/endpoint-content.txt", "w") as f:
f.write("<html>some new noise with <span id=in-here>cool stuff6</span> ok</html>") f.write("<html>some new noise with <span id=in-here>cool stuff6</span> ok</html>")
@@ -79,6 +81,4 @@ def test_trigger_regex_functionality_with_filter(client, live_server):
res = client.get(url_for("index")) res = client.get(url_for("index"))
assert b'unviewed' in res.data assert b'unviewed' in res.data
# Cleanup everything
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data
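
(Reference sketch, not part of the diff.) Both trigger tests rely on the convention that a trigger_text wrapped in slashes (e.g. "/cool.stuff\d/") is treated as a regular expression, while anything else is a plain substring. A hedged sketch of that check - the real matching code, including any case handling, may differ:

import re

# Hedged sketch only: does a line of page text trip the trigger?
def line_matches_trigger(line, trigger_text):
    if trigger_text.startswith("/") and trigger_text.endswith("/"):
        return re.search(trigger_text.strip("/"), line) is not None
    return trigger_text in line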

View File

@@ -44,61 +44,6 @@ def set_modified_response():
return None return None
# Handle utf-8 charset replies https://github.com/dgtlmoon/changedetection.io/pull/613
def test_check_xpath_filter_utf8(client, live_server):
filter='//item/*[self::description]'
d='''<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
<channel>
<title>rpilocator.com</title>
<link>https://rpilocator.com</link>
<description>Find Raspberry Pi Computers in Stock</description>
<lastBuildDate>Thu, 19 May 2022 23:27:30 GMT</lastBuildDate>
<image>
<url>https://rpilocator.com/favicon.png</url>
<title>rpilocator.com</title>
<link>https://rpilocator.com/</link>
<width>32</width>
<height>32</height>
</image>
<item>
<title>Stock Alert (UK): RPi CM4 - 1GB RAM, No MMC, No Wifi is In Stock at Pimoroni</title>
<description>Stock Alert (UK): RPi CM4 - 1GB RAM, No MMC, No Wifi is In Stock at Pimoroni</description>
<link>https://rpilocator.com?vendor=pimoroni&amp;utm_source=feed&amp;utm_medium=rss</link>
<category>pimoroni</category>
<category>UK</category>
<category>CM4</category>
<guid isPermaLink="false">F9FAB0D9-DF6F-40C8-8DEE5FC0646BB722</guid>
<pubDate>Thu, 19 May 2022 14:32:32 GMT</pubDate>
</item>
</channel>
</rss>'''
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(d)
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True, content_type="application/rss+xml;charset=UTF-8")
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
res = client.post(
url_for("edit_page", uuid="first"),
data={"css_filter": filter, "url": test_url, "tag": "", "headers": "", 'fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Updated watch." in res.data
time.sleep(3)
res = client.get(url_for("index"))
assert b'Unicode strings with encoding declaration are not supported.' not in res.data
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data
def test_check_markup_xpath_filter_restriction(client, live_server): def test_check_markup_xpath_filter_restriction(client, live_server):
sleep_time_for_fetch_thread = 3 sleep_time_for_fetch_thread = 3
@@ -150,8 +95,6 @@ def test_check_markup_xpath_filter_restriction(client, live_server):
res = client.get(url_for("index")) res = client.get(url_for("index"))
assert b'unviewed' not in res.data assert b'unviewed' not in res.data
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data
def test_xpath_validation(client, live_server): def test_xpath_validation(client, live_server):
@@ -174,8 +117,6 @@ def test_xpath_validation(client, live_server):
follow_redirects=True follow_redirects=True
) )
assert b"is not a valid XPath expression" in res.data assert b"is not a valid XPath expression" in res.data
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data
# actually only really used by the distll.io importer, but could be handy too # actually only really used by the distll.io importer, but could be handy too
@@ -212,6 +153,8 @@ def test_check_with_prefix_css_filter(client, live_server):
follow_redirects=True follow_redirects=True
) )
with open('/tmp/fuck.html', 'wb') as f:
f.write(res.data)
assert b"Some text thats the same" in res.data #in selector assert b"Some text thats the same" in res.data #in selector
assert b"Some text that will change" not in res.data #not in selector assert b"Some text that will change" not in res.data #not in selector

View File

@@ -1,7 +1,6 @@
#!/usr/bin/python3 #!/usr/bin/python3
from flask import make_response, request from flask import make_response, request
from flask import url_for
def set_original_response(): def set_original_response():
test_return_data = """<html> test_return_data = """<html>
@@ -56,32 +55,14 @@ def set_more_modified_response():
return None return None
# kinda funky, but works for now
def extract_api_key_from_UI(client):
import re
res = client.get(
url_for("settings_page"),
)
# <span id="api-key">{{api_key}}</span>
m = re.search('<span id="api-key">(.+?)</span>', str(res.data))
api_key = m.group(1)
return api_key.strip()
def live_server_setup(live_server): def live_server_setup(live_server):
@live_server.app.route('/test-endpoint') @live_server.app.route('/test-endpoint')
def test_endpoint(): def test_endpoint():
ctype = request.args.get('content_type') ctype = request.args.get('content_type')
status_code = request.args.get('status_code') status_code = request.args.get('status_code')
content = request.args.get('content') or None
try: try:
if content is not None:
resp = make_response(content, status_code)
resp.headers['Content-Type'] = ctype if ctype else 'text/html'
return resp
# Tried using a global var here but didn't seem to work, so reading from a file instead. # Tried using a global var here but didn't seem to work, so reading from a file instead.
with open("test-datastore/endpoint-content.txt", "r") as f: with open("test-datastore/endpoint-content.txt", "r") as f:
resp = make_response(f.read(), status_code) resp = make_response(f.read(), status_code)

View File

@@ -40,11 +40,10 @@ class update_worker(threading.Thread):
contents = "" contents = ""
screenshot = False screenshot = False
update_obj= {} update_obj= {}
xpath_data = False
now = time.time() now = time.time()
try: try:
changed_detected, update_obj, contents, screenshot, xpath_data = update_handler.run(uuid) changed_detected, update_obj, contents, screenshot = update_handler.run(uuid)
# Re #342 # Re #342
# In Python 3, all strings are sequences of Unicode characters. There is a bytes type that holds raw bytes. # In Python 3, all strings are sequences of Unicode characters. There is a bytes type that holds raw bytes.
@@ -56,25 +55,12 @@ class update_worker(threading.Thread):
except content_fetcher.ReplyWithContentButNoText as e: except content_fetcher.ReplyWithContentButNoText as e:
# Totally fine, it's by choice - just continue on, nothing more to care about # Totally fine, it's by choice - just continue on, nothing more to care about
# Page had elements/content but no renderable text # Page had elements/content but no renderable text
if self.datastore.data['watching'][uuid].get('css_filter'):
self.datastore.update_watch(uuid=uuid, update_obj={'last_error': "Got HTML content but no text found (CSS / xPath Filter not found in page?)"})
else:
self.datastore.update_watch(uuid=uuid, update_obj={'last_error': "Got HTML content but no text found."})
pass pass
except content_fetcher.EmptyReply as e: except content_fetcher.EmptyReply as e:
# Some kind of custom to-str handler in the exception handler that does this? # Some kind of custom to-str handler in the exception handler that does this?
err_text = "EmptyReply: Status Code {}".format(e.status_code) err_text = "EmptyReply: Status Code {}".format(e.status_code)
self.datastore.update_watch(uuid=uuid, update_obj={'last_error': err_text, self.datastore.update_watch(uuid=uuid, update_obj={'last_error': err_text,
'last_check_status': e.status_code}) 'last_check_status': e.status_code})
except content_fetcher.ScreenshotUnavailable as e:
err_text = "Screenshot unavailable, page did not render fully in the expected time"
self.datastore.update_watch(uuid=uuid, update_obj={'last_error': err_text,
'last_check_status': e.status_code})
except content_fetcher.PageUnloadable as e:
err_text = "Page request from server didnt respond correctly"
self.datastore.update_watch(uuid=uuid, update_obj={'last_error': err_text,
'last_check_status': e.status_code})
except Exception as e: except Exception as e:
self.app.logger.error("Exception reached processing watch UUID: %s - %s", uuid, str(e)) self.app.logger.error("Exception reached processing watch UUID: %s - %s", uuid, str(e))
self.datastore.update_watch(uuid=uuid, update_obj={'last_error': str(e)}) self.datastore.update_watch(uuid=uuid, update_obj={'last_error': str(e)})
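Read together, the exception handlers above translate each fetcher failure mode into a human-readable last_error stored against the watch, with EmptyReply also recording the HTTP status. A hedged, self-contained sketch of that mapping (the error strings mirror the diff; the exception classes below are local stand-ins for the content_fetcher ones, not the project's real imports):

class ReplyWithContentButNoText(Exception):
    # Stand-in for content_fetcher.ReplyWithContentButNoText
    pass

class EmptyReply(Exception):
    # Stand-in for content_fetcher.EmptyReply
    def __init__(self, status_code):
        self.status_code = status_code

def error_text_for(exc, has_css_filter):
    # Map a fetch exception to the message stored as the watch's 'last_error'.
    if isinstance(exc, ReplyWithContentButNoText):
        if has_css_filter:
            return "Got HTML content but no text found (CSS / xPath Filter not found in page?)"
        return "Got HTML content but no text found."
    if isinstance(exc, EmptyReply):
        return "EmptyReply: Status Code {}".format(exc.status_code)
    return str(exc)

print(error_text_for(EmptyReply(status_code=403), has_css_filter=False))  # EmptyReply: Status Code 403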
@@ -87,7 +73,9 @@ class update_worker(threading.Thread):
# For the FIRST time we check a site, or a change detected, save the snapshot. # For the FIRST time we check a site, or a change detected, save the snapshot.
if changed_detected or not watch['last_checked']: if changed_detected or not watch['last_checked']:
# A change was detected # A change was detected
fname = watch.save_history_text(contents=contents, timestamp=str(round(time.time()))) fname = self.datastore.save_history_text(watch_uuid=uuid, contents=contents)
# Should always be keyed by string(timestamp)
self.datastore.update_watch(uuid, {"history": {str(round(time.time())): fname}})
# Generally update anything interesting returned # Generally update anything interesting returned
self.datastore.update_watch(uuid=uuid, update_obj=update_obj) self.datastore.update_watch(uuid=uuid, update_obj=update_obj)
@@ -98,10 +86,16 @@ class update_worker(threading.Thread):
print (">> Change detected in UUID {} - {}".format(uuid, watch['url'])) print (">> Change detected in UUID {} - {}".format(uuid, watch['url']))
# Notifications should only trigger on the second time (first time, we gather the initial snapshot) # Notifications should only trigger on the second time (first time, we gather the initial snapshot)
if watch.history_n >= 2: if len(watch['history']) > 1:
dates = list(watch.history.keys()) dates = list(watch['history'].keys())
prev_fname = watch.history[dates[-2]] # Convert to int, sort and back to str again
# @todo replace datastore getter that does this automatically
dates = [int(i) for i in dates]
dates.sort(reverse=True)
dates = [str(i) for i in dates]
prev_fname = watch['history'][dates[1]]
# Did it have any notification alerts to hit? # Did it have any notification alerts to hit?
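Taken together, the two hunks above key each saved snapshot by a string unix timestamp in watch['history'], and once at least two entries exist they sort those keys numerically to find the previous snapshot for diffing and notifications. A hedged sketch of that lookup, with an invented history mapping for illustration (the convert/sort/lookup steps mirror the diff):

# Illustrative history: str(unix timestamp) -> path of the saved snapshot text (paths are made up).
watch = {
    'history': {
        '1652969000': 'test-datastore/uuid/1652969000.txt',
        '1652969600': 'test-datastore/uuid/1652969600.txt',
        '999999999': 'test-datastore/uuid/999999999.txt',
    }
}

if len(watch['history']) > 1:
    dates = list(watch['history'].keys())
    # Keys are strings, so convert to int before sorting: a plain string sort would rank
    # '999999999' above '1652969600' even though it is the older timestamp.
    dates = [int(i) for i in dates]
    dates.sort(reverse=True)                    # newest first
    dates = [str(i) for i in dates]
    prev_fname = watch['history'][dates[1]]     # second-newest entry = previous version
    print(prev_fname)                           # test-datastore/uuid/1652969000.txt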
@@ -154,9 +148,6 @@ class update_worker(threading.Thread):
# Always save the screenshot if it's available # Always save the screenshot if it's available
if screenshot: if screenshot:
self.datastore.save_screenshot(watch_uuid=uuid, screenshot=screenshot) self.datastore.save_screenshot(watch_uuid=uuid, screenshot=screenshot)
if xpath_data:
self.datastore.save_xpath_data(watch_uuid=uuid, data=xpath_data)
self.current_uuid = None # Done self.current_uuid = None # Done
self.q.task_done() self.q.task_done()

Binary file not shown (image, Before: 20 KiB).
Binary file not shown (image, Before: 22 KiB).
Binary file not shown (image, Before: 238 KiB).

View File

@@ -18,7 +18,7 @@ wtforms ~= 3.0
jsonpath-ng ~= 1.5.3 jsonpath-ng ~= 1.5.3
# Notification library # Notification library
apprise ~= 0.9.9 apprise ~= 0.9.8.3
# apprise mqtt https://github.com/dgtlmoon/changedetection.io/issues/315 # apprise mqtt https://github.com/dgtlmoon/changedetection.io/issues/315
paho-mqtt paho-mqtt

View File

Image diff not shown inline (Before: 115 KiB, After: 115 KiB).

View File

Image diff not shown inline (Before: 27 KiB, After: 27 KiB).

View File

Image diff not shown inline (Before: 190 KiB, After: 190 KiB).