Compare commits

notificati...API-interf (12 commits)

| SHA1 |
|---|
| fba4b6747e |
| 4cf2d9d7aa |
| 6bc1c681ec |
| 1267512858 |
| 886ef0c7c1 |
| 97c6db5e56 |
| 23dde28399 |
| 91fe2dd420 |
| 408c8878f3 |
| 37614224e5 |
| 1caff23d2c |
| b01ee24d55 |
**.github/ISSUE_TEMPLATE/bug_report.md** (vendored, 11 changed lines)

```diff
@@ -1,9 +1,9 @@
 ---
 name: Bug report
-about: Create a bug report, if you don't follow this template, your report will be DELETED
+about: Create a report to help us improve
 title: ''
-labels: 'triage'
+labels: ''
-assignees: 'dgtlmoon'
+assignees: ''

 ---

@@ -11,18 +11,15 @@ assignees: 'dgtlmoon'
 A clear and concise description of what the bug is.

 **Version**
-*Exact version* in the top right area: 0....
+In the top right area: 0....

 **To Reproduce**
-
 Steps to reproduce the behavior:
 1. Go to '...'
 2. Click on '....'
 3. Scroll down to '....'
 4. See error
-
-! ALWAYS INCLUDE AN EXAMPLE URL WHERE IT IS POSSIBLE TO RE-CREATE THE ISSUE - USE THE 'SHARE WATCH' FEATURE AND PASTE IN THE SHARE-LINK!

 **Expected behavior**
 A clear and concise description of what you expected to happen.

```
**.github/ISSUE_TEMPLATE/feature_request.md** (vendored, 4 changed lines)

```diff
@@ -1,8 +1,8 @@
 ---
 name: Feature request
 about: Suggest an idea for this project
-title: '[feature]'
+title: ''
-labels: 'enhancement'
+labels: ''
 assignees: ''

 ---
```
**.gitignore** (vendored, 1 changed line)

```diff
@@ -8,6 +8,5 @@ __pycache__
 build
 dist
 venv
-test-datastore
 *.egg-info*
 .vscode/settings.json
```
```diff
@@ -1,4 +1,3 @@
-recursive-include changedetectionio/api *
 recursive-include changedetectionio/templates *
 recursive-include changedetectionio/static *
 recursive-include changedetectionio/model *
```
**README.md** (21 changed lines)

```diff
@@ -12,7 +12,7 @@ Live your data-life *pro-actively* instead of *re-actively*.
 Free, Open-source web page monitoring, notification and change detection. Don't have time? [**Try our $6.99/month subscription - unlimited checks and watches!**](https://lemonade.changedetection.io/start)


-[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://lemonade.changedetection.io/start)
+[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://lemonade.changedetection.io/start)


 **Get your own private instance now! Let us host it for you!**

@@ -48,19 +48,12 @@ _Need an actual Chrome runner with Javascript support? We support fetching via W

 ## Screenshots

-### Examine differences in content.
+Examining differences in content.

-Easily see what changed, examine by word, line, or individual character.
+<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />

-<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />

 Please :star: star :star: this project and help it grow! https://github.com/dgtlmoon/changedetection.io/

-### Target elements with the Visual Selector tool.
-
-Available when connected to a <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Playwright-content-fetcher">playwright content fetcher</a> (available also as part of our subscription service)
-
-<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/visualselector-anim.gif" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />

 ## Installation

@@ -136,7 +129,7 @@ Just some examples

 <a href="https://github.com/caronc/apprise#popular-notification-services">And everything else in this list!</a>

-<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-notifications.png" style="max-width:100%;" alt="Self-hosted web page change monitoring notifications" title="Self-hosted web page change monitoring notifications" />
+<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot-notifications.png" style="max-width:100%;" alt="Self-hosted web page change monitoring notifications" title="Self-hosted web page change monitoring notifications" />

 Now you can also customise your notification content!

@@ -144,11 +137,11 @@ Now you can also customise your notification content!

 Detect changes and monitor data in JSON API's by using the built-in JSONPath selectors as a filter / selector.

-![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/json-filter-field-example.png)
+![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/json-filter-field-example.png)

 This will re-parse the JSON and apply formatting to the text, making it super easy to monitor and detect changes in JSON API results

-![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/json-diff-example.png)
+![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/json-diff-example.png)

 ### Parse JSON embedded in HTML!

@@ -184,7 +177,7 @@ Or directly donate an amount PayPal [:
```
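The README hunks above mention filtering JSON APIs with a JSONPath selector before change detection. A minimal sketch of that idea, assuming the third-party `jsonpath-ng` package; the sample response, path expression and field names are illustrative, not taken from the project:

```python
import hashlib
import json

from jsonpath_ng import parse  # pip install jsonpath-ng

# Pretend this came back from the watched JSON API
api_response = '{"product": {"name": "Widget", "price": 19.99, "stock": 42}}'
data = json.loads(api_response)

# Keep only the field we care about, e.g. the price
matches = parse('product.price').find(data)
filtered = json.dumps([m.value for m in matches], indent=2)

# A change detector would then compare a checksum of the filtered text
checksum = hashlib.md5(filtered.encode('utf-8')).hexdigest()
print(filtered, checksum)
```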
```diff
 # Worker thread tells us which UUID it is currently processing.
 for t in running_update_threads:
     if t.current_uuid == watch_obj['uuid']:
-        return '<span class="loader"></span><span> Checking now</span>'
+        return "Checking now.."

 if watch_obj['last_checked'] == 0:
     return 'Not yet'
@@ -179,10 +178,6 @@ def changedetection_app(config=None, datastore_o=None):
     global datastore
     datastore = datastore_o

-    # so far just for read-only via tests, but this will be moved eventually to be the main source
-    # (instead of the global var)
-    app.config['DATASTORE']=datastore_o
-
     #app.config.update(config or {})

     login_manager = flask_login.LoginManager(app)
@@ -322,19 +317,25 @@ def changedetection_app(config=None, datastore_o=None):

         for watch in sorted_watches:

-            dates = list(watch.history.keys())
+            dates = list(watch['history'].keys())
             # Re #521 - Don't bother processing this one if theres less than 2 snapshots, means we never had a change detected.
             if len(dates) < 2:
                 continue

-            prev_fname = watch.history[dates[-2]]
+            # Convert to int, sort and back to str again
+            # @todo replace datastore getter that does this automatically
+            dates = [int(i) for i in dates]
+            dates.sort(reverse=True)
+            dates = [str(i) for i in dates]
+            prev_fname = watch['history'][dates[1]]

-            if not watch.viewed:
+            if not watch['viewed']:
                 # Re #239 - GUID needs to be individual for each event
                 # @todo In the future make this a configurable link back (see work on BASE_URL https://github.com/dgtlmoon/changedetection.io/pull/228)
                 guid = "{}/{}".format(watch['uuid'], watch['last_changed'])
                 fe = fg.add_entry()


                 # Include a link to the diff page, they will have to login here to see if password protection is enabled.
                 # Description is the page you watch, link takes you to the diff JS UI page
                 base_url = datastore.data['settings']['application']['base_url']
```
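The added lines above re-sort the snapshot history keys numerically because they are stored as strings of UNIX timestamps, and plain string sorting orders them incorrectly once the values differ in length. A small illustration with made-up filenames:

```python
# History maps str(unix_timestamp) -> snapshot file on disk
history = {"999999999": "a.txt", "1650000000": "b.txt", "1650000500": "c.txt"}

wrong = sorted(history.keys(), reverse=True)
# ['999999999', '1650000500', '1650000000']  <- lexicographic, newest is NOT first

dates = sorted(history.keys(), key=int, reverse=True)
# ['1650000500', '1650000000', '999999999']  <- numeric, newest first

latest_fname = history[dates[0]]   # dates[0] in the hunk above
prev_fname = history[dates[1]]     # dates[1] in the hunk above
```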
```diff
@@ -349,14 +350,13 @@ def changedetection_app(config=None, datastore_o=None):

                 watch_title = watch.get('title') if watch.get('title') else watch.get('url')
                 fe.title(title=watch_title)
-                latest_fname = watch.history[dates[-1]]
+                latest_fname = watch['history'][dates[0]]

                 html_diff = diff.render_diff(prev_fname, latest_fname, include_equal=False, line_feed_sep="</br>")
-                fe.content(content="<html><body><h4>{}</h4>{}</body></html>".format(watch_title, html_diff),
-                           type='CDATA')
+                fe.description(description="<![CDATA[<html><body><h4>{}</h4>{}</body></html>".format(watch_title, html_diff))

                 fe.guid(guid, permalink=False)
-                dt = datetime.datetime.fromtimestamp(int(watch.newest_history_key))
+                dt = datetime.datetime.fromtimestamp(int(watch['newest_history_key']))
                 dt = dt.replace(tzinfo=pytz.UTC)
                 fe.pubDate(dt)

@@ -415,13 +415,11 @@ def changedetection_app(config=None, datastore_o=None):
                                  tags=existing_tags,
                                  active_tag=limit_tag,
                                  app_rss_token=datastore.data['settings']['application']['rss_access_token'],
-                                 has_unviewed=datastore.has_unviewed,
+                                 has_unviewed=datastore.data['has_unviewed'],
                                  # Don't link to hosting when we're on the hosting environment
                                  hosted_sticky=os.getenv("SALTED_PASS", False) == False,
                                  guid=datastore.data['app_guid'],
                                  queued_uuids=update_q.queue)
-
-
         if session.get('share-link'):
             del(session['share-link'])
         return output
@@ -458,19 +456,6 @@ def changedetection_app(config=None, datastore_o=None):

         return 'OK'

-
-    @app.route("/scrub/<string:uuid>", methods=['GET'])
-    @login_required
-    def scrub_watch(uuid):
-        try:
-            datastore.scrub_watch(uuid)
-        except KeyError:
-            flash('Watch not found', 'error')
-        else:
-            flash("Scrubbed watch {}".format(uuid))
-
-        return redirect(url_for('index'))
-
     @app.route("/scrub", methods=['GET', 'POST'])
     @login_required
     def scrub_page():
@@ -506,10 +491,10 @@ def changedetection_app(config=None, datastore_o=None):

        # 0 means that theres only one, so that there should be no 'unviewed' history available
        if newest_history_key == 0:
-            newest_history_key = list(datastore.data['watching'][uuid].history.keys())[0]
+            newest_history_key = list(datastore.data['watching'][uuid]['history'].keys())[0]

        if newest_history_key:
-            with open(datastore.data['watching'][uuid].history[newest_history_key],
+            with open(datastore.data['watching'][uuid]['history'][newest_history_key],
                      encoding='utf-8') as file:
                raw_content = file.read()

@@ -603,12 +588,12 @@ def changedetection_app(config=None, datastore_o=None):

         # Reset the previous_md5 so we process a new snapshot including stripping ignore text.
         if form_ignore_text:
-            if len(datastore.data['watching'][uuid].history):
+            if len(datastore.data['watching'][uuid]['history']):
                 extra_update_obj['previous_md5'] = get_current_checksum_include_ignore_text(uuid=uuid)

         # Reset the previous_md5 so we process a new snapshot including stripping ignore text.
         if form.css_filter.data.strip() != datastore.data['watching'][uuid]['css_filter']:
-            if len(datastore.data['watching'][uuid].history):
+            if len(datastore.data['watching'][uuid]['history']):
                 extra_update_obj['previous_md5'] = get_current_checksum_include_ignore_text(uuid=uuid)

         # Be sure proxy value is None
@@ -641,12 +626,6 @@ def changedetection_app(config=None, datastore_o=None):
         if request.method == 'POST' and not form.validate():
             flash("An error occurred, please see below.", "error")

-        visualselector_data_is_ready = datastore.visualselector_data_is_ready(uuid)
-
-        # Only works reliably with Playwright
-        visualselector_enabled = os.getenv('PLAYWRIGHT_DRIVER_URL', False) and default['fetch_backend'] == 'html_webdriver'
-
-
         output = render_template("edit.html",
                                  uuid=uuid,
                                  watch=datastore.data['watching'][uuid],
@@ -654,9 +633,7 @@ def changedetection_app(config=None, datastore_o=None):
                                  has_empty_checktime=using_default_check_time,
                                  using_global_webdriver_wait=default['webdriver_delay'] is None,
                                  current_base_url=datastore.data['settings']['application']['base_url'],
-                                 emailprefix=os.getenv('NOTIFICATION_MAIL_BUTTON_PREFIX', False),
-                                 visualselector_data_is_ready=visualselector_data_is_ready,
-                                 visualselector_enabled=visualselector_enabled
+                                 emailprefix=os.getenv('NOTIFICATION_MAIL_BUTTON_PREFIX', False)
                                  )

         return output
@@ -763,14 +740,15 @@ def changedetection_app(config=None, datastore_o=None):
         return output

     # Clear all statuses, so we do not see the 'unviewed' class
-    @app.route("/form/mark-all-viewed", methods=['GET'])
+    @app.route("/api/mark-all-viewed", methods=['GET'])
     @login_required
     def mark_all_viewed():

         # Save the current newest history as the most recently viewed
         for watch_uuid, watch in datastore.data['watching'].items():
-            datastore.set_last_viewed(watch_uuid, int(time.time()))
+            datastore.set_last_viewed(watch_uuid, watch['newest_history_key'])

+        flash("Cleared all statuses.")
         return redirect(url_for('index'))

     @app.route("/diff/<string:uuid>", methods=['GET'])
@@ -788,17 +766,20 @@ def changedetection_app(config=None, datastore_o=None):
             flash("No history found for the specified link, bad link?", "error")
             return redirect(url_for('index'))

-        history = watch.history
-        dates = list(history.keys())
+        dates = list(watch['history'].keys())
+        # Convert to int, sort and back to str again
+        # @todo replace datastore getter that does this automatically
+        dates = [int(i) for i in dates]
+        dates.sort(reverse=True)
+        dates = [str(i) for i in dates]

         if len(dates) < 2:
             flash("Not enough saved change detection snapshots to produce a report.", "error")
             return redirect(url_for('index'))

         # Save the current newest history as the most recently viewed
-        datastore.set_last_viewed(uuid, time.time())
-        newest_file = history[dates[-1]]
+        datastore.set_last_viewed(uuid, dates[0])
+        newest_file = watch['history'][dates[0]]

         try:
             with open(newest_file, 'r') as f:
@@ -808,10 +789,10 @@ def changedetection_app(config=None, datastore_o=None):

         previous_version = request.args.get('previous_version')
         try:
-            previous_file = history[previous_version]
+            previous_file = watch['history'][previous_version]
         except KeyError:
             # Not present, use a default value, the second one in the sorted list.
-            previous_file = history[dates[-2]]
+            previous_file = watch['history'][dates[1]]

         try:
             with open(previous_file, 'r') as f:
@@ -822,25 +803,18 @@ def changedetection_app(config=None, datastore_o=None):

         screenshot_url = datastore.get_screenshot(uuid)

-        system_uses_webdriver = datastore.data['settings']['application']['fetch_backend'] == 'html_webdriver'
-
-        is_html_webdriver = True if watch.get('fetch_backend') == 'html_webdriver' or (
-                watch.get('fetch_backend', None) is None and system_uses_webdriver) else False
-
-        output = render_template("diff.html",
-                                 watch_a=watch,
+        output = render_template("diff.html", watch_a=watch,
                                  newest=newest_version_file_contents,
                                  previous=previous_version_file_contents,
                                  extra_stylesheets=extra_stylesheets,
                                  versions=dates[1:],
                                  uuid=uuid,
-                                 newest_version_timestamp=dates[-1],
+                                 newest_version_timestamp=dates[0],
                                  current_previous_version=str(previous_version),
                                  current_diff_url=watch['url'],
                                  extra_title=" - Diff - {}".format(watch['title'] if watch['title'] else watch['url']),
                                  left_sticky=True,
-                                 screenshot=screenshot_url,
-                                 is_html_webdriver=is_html_webdriver)
+                                 screenshot=screenshot_url)

         return output

@@ -855,12 +829,6 @@ def changedetection_app(config=None, datastore_o=None):
         if uuid == 'first':
             uuid = list(datastore.data['watching'].keys()).pop()

-        # Normally you would never reach this, because the 'preview' button is not available when there's no history
-        # However they may try to scrub and reload the page
-        if datastore.data['watching'][uuid].history_n == 0:
-            flash("Preview unavailable - No fetch/check completed or triggers not reached", "error")
-            return redirect(url_for('index'))
-
         extra_stylesheets = [url_for('static_content', group='styles', filename='diff.css')]

         try:
@@ -869,9 +837,9 @@ def changedetection_app(config=None, datastore_o=None):
             flash("No history found for the specified link, bad link?", "error")
             return redirect(url_for('index'))

-        if watch.history_n >0:
-            timestamps = sorted(watch.history.keys(), key=lambda x: int(x))
-            filename = watch.history[timestamps[-1]]
+        if len(watch['history']):
+            timestamps = sorted(watch['history'].keys(), key=lambda x: int(x))
+            filename = watch['history'][timestamps[-1]]
             try:
                 with open(filename, 'r') as f:
                     tmp = f.readlines()
@@ -907,11 +875,6 @@ def changedetection_app(config=None, datastore_o=None):
             content.append({'line': "No history found", 'classes': ''})

         screenshot_url = datastore.get_screenshot(uuid)
-        system_uses_webdriver = datastore.data['settings']['application']['fetch_backend'] == 'html_webdriver'
-
-        is_html_webdriver = True if watch.get('fetch_backend') == 'html_webdriver' or (
-                watch.get('fetch_backend', None) is None and system_uses_webdriver) else False
-
         output = render_template("preview.html",
                                  content=content,
                                  extra_stylesheets=extra_stylesheets,
@@ -920,8 +883,7 @@ def changedetection_app(config=None, datastore_o=None):
                                  current_diff_url=watch['url'],
                                  screenshot=screenshot_url,
                                  watch=watch,
-                                 uuid=uuid,
-                                 is_html_webdriver=is_html_webdriver)
+                                 uuid=uuid)

         return output

@@ -930,7 +892,7 @@ def changedetection_app(config=None, datastore_o=None):
     def notification_logs():
         global notification_debug_log
         output = render_template("notification-log.html",
-                                 logs=notification_debug_log if len(notification_debug_log) else ["Notification logs are empty - no notifications sent yet."])
+                                 logs=notification_debug_log if len(notification_debug_log) else ["No errors or warnings detected"])

         return output

@@ -1014,9 +976,10 @@ def changedetection_app(config=None, datastore_o=None):

     @app.route("/static/<string:group>/<string:filename>", methods=['GET'])
     def static_content(group, filename):
+        if group == 'screenshot':

             from flask import make_response

-        if group == 'screenshot':
             # Could be sensitive, follow password requirements
             if datastore.data['settings']['application']['password'] and not flask_login.current_user.is_authenticated:
                 abort(403)
@@ -1035,26 +998,6 @@ def changedetection_app(config=None, datastore_o=None):
             except FileNotFoundError:
                 abort(404)

-
-        if group == 'visual_selector_data':
-            # Could be sensitive, follow password requirements
-            if datastore.data['settings']['application']['password'] and not flask_login.current_user.is_authenticated:
-                abort(403)
-
-            # These files should be in our subdirectory
-            try:
-                # set nocache, set content-type
-                watch_dir = datastore_o.datastore_path + "/" + filename
-                response = make_response(send_from_directory(filename="elements.json", directory=watch_dir, path=watch_dir + "/elements.json"))
-                response.headers['Content-type'] = 'application/json'
-                response.headers['Cache-Control'] = 'no-cache, no-store, must-revalidate'
-                response.headers['Pragma'] = 'no-cache'
-                response.headers['Expires'] = 0
-                return response
-
-            except FileNotFoundError:
-                abort(404)
-
         # These files should be in our subdirectory
         try:
             return send_from_directory("static/{}".format(group), path=filename)
@@ -1171,7 +1114,6 @@ def changedetection_app(config=None, datastore_o=None):

         # copy it to memory as trim off what we dont need (history)
         watch = deepcopy(datastore.data['watching'][uuid])
-        # For older versions that are not a @property
         if (watch.get('history')):
             del (watch['history'])

@@ -1201,14 +1143,14 @@ def changedetection_app(config=None, datastore_o=None):


         except Exception as e:
-            logging.error("Error sharing -{}".format(str(e)))
-            flash("Could not share, something went wrong while communicating with the share server - {}".format(str(e)), 'error')
+            flash("Could not share, something went wrong while communicating with the share server.", 'error')

         # https://changedetection.io/share/VrMv05wpXyQa
         # in the browser - should give you a nice info page - wtf
         # paste in etc
         return redirect(url_for('index'))


     # @todo handle ctrl break
     ticker_thread = threading.Thread(target=ticker_thread_check_time_launch_checks).start()

@@ -1250,9 +1192,6 @@ def check_for_new_version():

 def notification_runner():
     global notification_debug_log
-    from datetime import datetime
-    import json
-
     while not app.config.exit.is_set():
         try:
             # At the moment only one thread runs (single runner)
@@ -1261,16 +1200,13 @@ def notification_runner():
             time.sleep(1)

         else:
-            now = datetime.now()
+            # Process notifications

             try:
                 from changedetectionio import notification
-                sent_obj = notification.process_notification(n_object, datastore)
+                notification.process_notification(n_object, datastore)

             except Exception as e:
-                logging.error("Watch URL: {} Error {}".format(n_object['watch_url'], str(e)))
+                print("Watch URL: {} Error {}".format(n_object['watch_url'], str(e)))

                 # UUID wont be present when we submit a 'test' from the global settings
                 if 'uuid' in n_object:
@@ -1280,19 +1216,14 @@ def notification_runner():
                     log_lines = str(e).splitlines()
                     notification_debug_log += log_lines

-            # Process notifications
-            notification_debug_log+= ["{} - SENDING - {}".format(now.strftime("%Y/%m/%d %H:%M:%S,000"), json.dumps(sent_obj))]
             # Trim the log length
             notification_debug_log = notification_debug_log[-100:]


 # Thread runner to check every minute, look for new watches to feed into the Queue.
 def ticker_thread_check_time_launch_checks():
-    import random
     from changedetectionio import update_worker

-    recheck_time_minimum_seconds = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 20))
-    print("System env MINIMUM_SECONDS_RECHECK_TIME", recheck_time_minimum_seconds)
-
     # Spin up Workers that do the fetching
     # Can be overriden by ENV or use the default settings
     n_workers = int(os.getenv("FETCH_WORKERS", datastore.data['settings']['requests']['workers']))
```
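For reference, the "SENDING" entry removed above is just a timestamped line appended to the in-memory notification debug log. A sketch of that formatting with an invented payload:

```python
import json
from datetime import datetime

sent_obj = {"watch_url": "https://example.com", "title": "Change detected"}
now = datetime.now()

log_line = "{} - SENDING - {}".format(now.strftime("%Y/%m/%d %H:%M:%S,000"), json.dumps(sent_obj))

notification_debug_log = []
notification_debug_log += [log_line]
notification_debug_log = notification_debug_log[-100:]  # trim the log length
```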
```diff
@@ -1310,10 +1241,9 @@ def ticker_thread_check_time_launch_checks():
             running_uuids.append(t.current_uuid)

     # Re #232 - Deepcopy the data incase it changes while we're iterating through it all
-    watch_uuid_list = []
     while True:
         try:
-            watch_uuid_list = datastore.data['watching'].keys()
+            copied_datastore = deepcopy(datastore)
         except RuntimeError as e:
             # RuntimeError: dictionary changed size during iteration
             time.sleep(0.1)
@@ -1324,49 +1254,33 @@ def ticker_thread_check_time_launch_checks():
         while update_q.qsize() >= 2000:
             time.sleep(1)


-        recheck_time_system_seconds = int(datastore.threshold_seconds)

         # Check for watches outside of the time threshold to put in the thread queue.
         now = time.time()
-        for uuid in watch_uuid_list:
-            watch = datastore.data['watching'].get(uuid)
-            if not watch:
-                logging.error("Watch: {} no longer present.".format(uuid))
-                continue
+        recheck_time_minimum_seconds = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60))
+        recheck_time_system_seconds = datastore.threshold_seconds
+        for uuid, watch in copied_datastore.data['watching'].items():

             # No need todo further processing if it's paused
             if watch['paused']:
                 continue

             # If they supplied an individual entry minutes to threshold.
-            watch_threshold_seconds = watch.threshold_seconds()
-            threshold = watch_threshold_seconds if watch_threshold_seconds > 0 else recheck_time_system_seconds
+            threshold = now
+            watch_threshold_seconds = watch.threshold_seconds()
+            if watch_threshold_seconds:
+                threshold -= watch_threshold_seconds
+            else:
+                threshold -= recheck_time_system_seconds

-            # #580 - Jitter plus/minus amount of time to make the check seem more random to the server
-            jitter = datastore.data['settings']['requests'].get('jitter_seconds', 0)
-            if jitter > 0:
-                if watch.jitter_seconds == 0:
-                    watch.jitter_seconds = random.uniform(-abs(jitter), jitter)
-
-            seconds_since_last_recheck = now - watch['last_checked']
-            if seconds_since_last_recheck >= (threshold + watch.jitter_seconds) and seconds_since_last_recheck >= recheck_time_minimum_seconds:
+            # Yeah, put it in the queue, it's more than time
+            if watch['last_checked'] <= max(threshold, recheck_time_minimum_seconds):
                 if not uuid in running_uuids and uuid not in update_q.queue:
-                    print("Queued watch UUID {} last checked at {} queued at {:0.2f} jitter {:0.2f}s, {:0.2f}s since last checked".format(uuid,
-                                                                                                                                          watch['last_checked'],
-                                                                                                                                          now,
-                                                                                                                                          watch.jitter_seconds,
-                                                                                                                                          now - watch['last_checked']))
-                    # Into the queue with you
                     update_q.put(uuid)

-            # Reset for next time
-            watch.jitter_seconds = 0
-
-            # Wait before checking the list again - saves CPU
-            time.sleep(1)
+        # Wait a few seconds before checking the list again
+        time.sleep(3)

         # Should be low so we can break this out in testing
         app.config.exit.wait(1)
```
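The left-hand side of the hunk above schedules rechecks with a per-watch random jitter (#580) so that checks do not hit the target site at perfectly regular intervals. A condensed sketch of that due-for-recheck test, with the surrounding queue wiring simplified:

```python
import random
import time

def recheck_is_due(last_checked, watch_threshold_seconds, system_threshold_seconds,
                   jitter_seconds, minimum_seconds=20):
    """Roughly the left-hand side's condition for putting a watch into the queue."""
    threshold = watch_threshold_seconds if watch_threshold_seconds > 0 else system_threshold_seconds
    seconds_since_last_recheck = time.time() - last_checked
    return (seconds_since_last_recheck >= (threshold + jitter_seconds)
            and seconds_since_last_recheck >= minimum_seconds)

# The jitter is drawn once per watch and reset after the watch has been queued
jitter = 5  # seconds, as configured in settings['requests']['jitter_seconds']
jitter_seconds = random.uniform(-abs(jitter), jitter)

print(recheck_is_due(last_checked=time.time() - 120,
                     watch_threshold_seconds=0,
                     system_threshold_seconds=60,
                     jitter_seconds=jitter_seconds))
```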
```diff
@@ -28,7 +28,8 @@ class Watch(Resource):
             return "OK", 200

         # Return without history, get that via another API call
-        watch['history_n'] = watch.history_n
+        watch['history_n'] = len(watch['history'])
+        del (watch['history'])
         return watch

     @auth.check_token
@@ -51,7 +52,7 @@ class WatchHistory(Resource):
         watch = self.datastore.data['watching'].get(uuid)
         if not watch:
             abort(404, message='No watch exists with the UUID of {}'.format(uuid))
-        return watch.history, 200
+        return watch['history'], 200


 class WatchSingleHistory(Resource):
@@ -68,13 +69,13 @@ class WatchSingleHistory(Resource):
         if not watch:
             abort(404, message='No watch exists with the UUID of {}'.format(uuid))

-        if not len(watch.history):
+        if not len(watch['history']):
             abort(404, message='Watch found but no history exists for the UUID {}'.format(uuid))

         if timestamp == 'latest':
-            timestamp = list(watch.history.keys())[-1]
+            timestamp = list(watch['history'].keys())[-1]

-        with open(watch.history[timestamp], 'r') as f:
+        with open(watch['history'][timestamp], 'r') as f:
             content = f.read()

         response = make_response(content, 200)
```
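The change that recurs throughout this compare is `watch.history` (a property on the watch object) versus `watch['history']` (a plain dict stored with the watch). A rough sketch of the property-based shape, purely for orientation; the attribute and key names are illustrative and not the project's actual model:

```python
class Watch(dict):
    """A watch that derives its history map instead of persisting it in the JSON."""

    @property
    def history(self):
        # {str(unix_timestamp): path_to_snapshot_file}, built on demand
        return dict(self.get('_snapshot_index', {}))

    @property
    def history_n(self):
        return len(self.history)

    @property
    def newest_history_key(self):
        keys = sorted(self.history.keys(), key=int)
        return keys[-1] if keys else 0
```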
```diff
@@ -1,19 +1,10 @@
 from abc import ABC, abstractmethod
 import chardet
-import json
 import os
 import requests
 import time
 import sys

-class PageUnloadable(Exception):
-    def __init__(self, status_code, url):
-        # Set this so we can use it in other parts of the app
-        self.status_code = status_code
-        self.url = url
-        return
-    pass
-
 class EmptyReply(Exception):
     def __init__(self, status_code, url):
         # Set this so we can use it in other parts of the app
@@ -22,14 +13,6 @@ class EmptyReply(Exception):
         return
     pass

-class ScreenshotUnavailable(Exception):
-    def __init__(self, status_code, url):
-        # Set this so we can use it in other parts of the app
-        self.status_code = status_code
-        self.url = url
-        return
-    pass
-
 class ReplyWithContentButNoText(Exception):
     def __init__(self, status_code, url):
         # Set this so we can use it in other parts of the app
@@ -44,135 +27,6 @@ class Fetcher():
     status_code = None
     content = None
     headers = None

-    fetcher_description = "No description"
-    xpath_element_js = """
-        // Include the getXpath script directly, easier than fetching
-        !function(e,n){"object"==typeof exports&&"undefined"!=typeof module?module.exports=n():"function"==typeof define&&define.amd?define(n):(e=e||self).getXPath=n()}(this,function(){return function(e){var n=e;if(n&&n.id)return'//*[@id="'+n.id+'"]';for(var o=[];n&&Node.ELEMENT_NODE===n.nodeType;){for(var i=0,r=!1,d=n.previousSibling;d;)d.nodeType!==Node.DOCUMENT_TYPE_NODE&&d.nodeName===n.nodeName&&i++,d=d.previousSibling;for(d=n.nextSibling;d;){if(d.nodeName===n.nodeName){r=!0;break}d=d.nextSibling}o.push((n.prefix?n.prefix+":":"")+n.localName+(i||r?"["+(i+1)+"]":"")),n=n.parentNode}return o.length?"/"+o.reverse().join("/"):""}});
-
-
-        const findUpTag = (el) => {
-            let r = el
-            chained_css = [];
-            depth=0;
-
-            // Strategy 1: Keep going up until we hit an ID tag, imagine it's like #list-widget div h4
-            while (r.parentNode) {
-                if(depth==5) {
-                    break;
-                }
-                if('' !==r.id) {
-                    chained_css.unshift("#"+r.id);
-                    final_selector= chained_css.join('>');
-                    // Be sure theres only one, some sites have multiples of the same ID tag :-(
-                    if (window.document.querySelectorAll(final_selector).length ==1 ) {
-                        return final_selector;
-                    }
-                    return null;
-                } else {
-                    chained_css.unshift(r.tagName.toLowerCase());
-                }
-                r=r.parentNode;
-                depth+=1;
-            }
-            return null;
-        }
-
-
-        // @todo - if it's SVG or IMG, go into image diff mode
-        var elements = window.document.querySelectorAll("div,span,form,table,tbody,tr,td,a,p,ul,li,h1,h2,h3,h4, header, footer, section, article, aside, details, main, nav, section, summary");
-        var size_pos=[];
-        // after page fetch, inject this JS
-        // build a map of all elements and their positions (maybe that only include text?)
-        var bbox;
-        for (var i = 0; i < elements.length; i++) {
-            bbox = elements[i].getBoundingClientRect();
-
-            // forget really small ones
-            if (bbox['width'] <20 && bbox['height'] < 20 ) {
-                continue;
-            }
-
-            // @todo the getXpath kind of sucks, it doesnt know when there is for example just one ID sometimes
-            // it should not traverse when we know we can anchor off just an ID one level up etc..
-            // maybe, get current class or id, keep traversing up looking for only class or id until there is just one match
-
-            // 1st primitive - if it has class, try joining it all and select, if theres only one.. well thats us.
-            xpath_result=false;
-
-            try {
-                var d= findUpTag(elements[i]);
-                if (d) {
-                    xpath_result =d;
-                }
-            } catch (e) {
-                console.log(e);
-            }
-
-            // You could swap it and default to getXpath and then try the smarter one
-            // default back to the less intelligent one
-            if (!xpath_result) {
-                try {
-                    // I've seen on FB and eBay that this doesnt work
-                    // ReferenceError: getXPath is not defined at eval (eval at evaluate (:152:29), <anonymous>:67:20) at UtilityScript.evaluate (<anonymous>:159:18) at UtilityScript.<anonymous> (<anonymous>:1:44)
-                    xpath_result = getXPath(elements[i]);
-                } catch (e) {
-                    console.log(e);
-                    continue;
-                }
-            }
-
-            if(window.getComputedStyle(elements[i]).visibility === "hidden") {
-                continue;
-            }
-
-            size_pos.push({
-                xpath: xpath_result,
-                width: Math.round(bbox['width']),
-                height: Math.round(bbox['height']),
-                left: Math.floor(bbox['left']),
-                top: Math.floor(bbox['top']),
-                childCount: elements[i].childElementCount
-            });
-        }
-
-
-        // inject the current one set in the css_filter, which may be a CSS rule
-        // used for displaying the current one in VisualSelector, where its not one we generated.
-        if (css_filter.length) {
-            q=false;
-            try {
-                // is it xpath?
-                if (css_filter.startsWith('/') || css_filter.startsWith('xpath:')) {
-                    q=document.evaluate(css_filter.replace('xpath:',''), document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
-                } else {
-                    q=document.querySelector(css_filter);
-                }
-            } catch (e) {
-                // Maybe catch DOMException and alert?
-                console.log(e);
-            }
-            bbox=false;
-            if(q) {
-                bbox = q.getBoundingClientRect();
-            }
-
-            if (bbox && bbox['width'] >0 && bbox['height']>0) {
-                size_pos.push({
-                    xpath: css_filter,
-                    width: bbox['width'],
-                    height: bbox['height'],
-                    left: bbox['left'],
-                    top: bbox['top'],
-                    childCount: q.childElementCount
-                });
-            }
-        }
-        // Window.width required for proper scaling in the frontend
-        return {'size_pos':size_pos, 'browser_width': window.innerWidth};
-    """
-    xpath_data = None

     # Will be needed in the future by the VisualSelector, always get this where possible.
     screenshot = False
     fetcher_description = "No description"
@@ -193,8 +47,7 @@ class Fetcher():
             request_headers,
             request_body,
             request_method,
-            ignore_status_codes=False,
-            current_css_filter=None):
+            ignore_status_codes=False):
         # Should set self.error, self.status_code and self.content
         pass

@@ -275,95 +128,52 @@ class base_html_playwright(Fetcher):
            request_headers,
            request_body,
            request_method,
-           ignore_status_codes=False,
-           current_css_filter=None):
+           ignore_status_codes=False):

        from playwright.sync_api import sync_playwright
        import playwright._impl._api_types
        from playwright._impl._api_types import Error, TimeoutError
-       response = None
        with sync_playwright() as p:
            browser_type = getattr(p, self.browser_type)

            # Seemed to cause a connection Exception even tho I can see it connect
            # self.browser = browser_type.connect(self.command_executor, timeout=timeout*1000)
-           # 60,000 connection timeout only
-           browser = browser_type.connect_over_cdp(self.command_executor, timeout=60000)
+           browser = browser_type.connect_over_cdp(self.command_executor, timeout=timeout * 1000)

            # Set user agent to prevent Cloudflare from blocking the browser
            # Use the default one configured in the App.py model that's passed from fetch_site_status.py
            context = browser.new_context(
                user_agent=request_headers['User-Agent'] if request_headers.get('User-Agent') else 'Mozilla/5.0',
-               proxy=self.proxy,
-               # This is needed to enable JavaScript execution on GitHub and others
-               bypass_csp=True,
-               # Should never be needed
-               accept_downloads=False
+               proxy=self.proxy
            )

            page = context.new_page()
+           page.set_viewport_size({"width": 1280, "height": 1024})
            try:
-               page.set_default_navigation_timeout(90000)
-               page.set_default_timeout(90000)
-
-               # Bug - never set viewport size BEFORE page.goto
-
-               # Waits for the next navigation. Using Python context manager
-               # prevents a race condition between clicking and waiting for a navigation.
-               with page.expect_navigation():
-                   response = page.goto(url, wait_until='load')
+               response = page.goto(url, timeout=timeout * 1000, wait_until='commit')
+               # Wait_until = commit
+               # - `'commit'` - consider operation to be finished when network response is received and the document started loading.
+               # Better to not use any smarts from Playwright and just wait an arbitrary number of seconds
+               # This seemed to solve nearly all 'TimeoutErrors'
+               extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
+               page.wait_for_timeout(extra_wait * 1000)

            except playwright._impl._api_types.TimeoutError as e:
-               context.close()
-               browser.close()
-               # This can be ok, we will try to grab what we could retrieve
-               pass
-           except Exception as e:
-               print ("other exception when page.goto")
-               print (str(e))
-               context.close()
-               browser.close()
-               raise PageUnloadable(url=url, status_code=None)
+               raise EmptyReply(url=url, status_code=None)

            if response is None:
-               context.close()
-               browser.close()
-               print ("response object was none")
                raise EmptyReply(url=url, status_code=None)

-           # Bug 2(?) Set the viewport size AFTER loading the page
-           page.set_viewport_size({"width": 1280, "height": 1024})
-           extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
-           time.sleep(extra_wait)
-           self.content = page.content()
+           if len(page.content().strip()) == 0:
+               raise EmptyReply(url=url, status_code=None)

            self.status_code = response.status
-           if len(self.content.strip()) == 0:
-               context.close()
-               browser.close()
-               print ("Content was empty")
-               raise EmptyReply(url=url, status_code=None)
+           self.content = page.content()

            self.headers = response.all_headers()

-           if current_css_filter is not None:
-               page.evaluate("var css_filter={}".format(json.dumps(current_css_filter)))
-           else:
-               page.evaluate("var css_filter=''")
-
-           self.xpath_data = page.evaluate("async () => {" + self.xpath_element_js + "}")
-
-           # Bug 3 in Playwright screenshot handling
            # Some bug where it gives the wrong screenshot size, but making a request with the clip set first seems to solve it
            # JPEG is better here because the screenshots can be very very large
-           try:
-               page.screenshot(type='jpeg', clip={'x': 1.0, 'y': 1.0, 'width': 1280, 'height': 1024})
-               self.screenshot = page.screenshot(type='jpeg', full_page=True, quality=92)
-           except Exception as e:
-               context.close()
-               browser.close()
-               raise ScreenshotUnavailable(url=url, status_code=None)
+           page.screenshot(type='jpeg', clip={'x': 1.0, 'y': 1.0, 'width': 1280, 'height': 1024})
+           self.screenshot = page.screenshot(type='jpeg', full_page=True, quality=90)

            context.close()
            browser.close()

```
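The right-hand side of the Playwright hunk above connects to a remote browser over CDP, navigates with `wait_until='commit'`, sleeps a fixed extra delay, then captures the page content and a full-page JPEG. A self-contained sketch of that flow; the CDP endpoint and the URL are placeholders:

```python
import os
from playwright.sync_api import sync_playwright

url = "https://example.com"
timeout = 10  # seconds

with sync_playwright() as p:
    # Placeholder endpoint for a remotely running Chromium exposing CDP
    browser = p.chromium.connect_over_cdp("ws://127.0.0.1:3000", timeout=timeout * 1000)
    context = browser.new_context(user_agent="Mozilla/5.0")
    page = context.new_page()
    page.set_viewport_size({"width": 1280, "height": 1024})

    response = page.goto(url, timeout=timeout * 1000, wait_until='commit')

    # 'commit' returns as soon as the document starts loading, so wait a
    # fixed extra delay for the page to render before extracting anything
    extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5))
    page.wait_for_timeout(extra_wait * 1000)

    html = page.content()
    screenshot = page.screenshot(type='jpeg', full_page=True, quality=90)

    context.close()
    browser.close()
```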
```diff
@@ -415,8 +225,7 @@ class base_html_webdriver(Fetcher):
             request_headers,
             request_body,
             request_method,
-            ignore_status_codes=False,
-            current_css_filter=None):
+            ignore_status_codes=False):

         from selenium import webdriver
         from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
@@ -436,10 +245,6 @@ class base_html_webdriver(Fetcher):
             self.quit()
             raise

-        self.driver.set_window_size(1280, 1024)
-        self.driver.implicitly_wait(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)))
-        self.screenshot = self.driver.get_screenshot_as_png()
-
         # @todo - how to check this? is it possible?
         self.status_code = 200
         # @todo somehow we should try to get this working for WebDriver
@@ -449,6 +254,8 @@ class base_html_webdriver(Fetcher):
         time.sleep(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay)
         self.content = self.driver.page_source
         self.headers = {}
+        self.screenshot = self.driver.get_screenshot_as_png()
+        self.quit()

     # Does the connection to the webdriver work? run a test connection.
     def is_ready(self):
```
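The right-hand side of the WebDriver hunks above takes the screenshot after the render delay and quits the driver inside `run()`. A rough standalone sketch of that flow using Selenium 3 style `Remote`/`DesiredCapabilities`, with a placeholder hub address:

```python
import os
import time

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

driver = webdriver.Remote(command_executor='http://127.0.0.1:4444/wd/hub',
                          desired_capabilities=DesiredCapabilities.CHROME)
try:
    driver.get("https://example.com")
    # Fixed delay for the page to render before grabbing anything
    time.sleep(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)))
    content = driver.page_source
    screenshot = driver.get_screenshot_as_png()
finally:
    driver.quit()
```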
```diff
@@ -485,8 +292,7 @@ class html_requests(Fetcher):
             request_headers,
             request_body,
             request_method,
-            ignore_status_codes=False,
-            current_css_filter=None):
+            ignore_status_codes=False):

         proxies={}

```
```diff
@@ -94,7 +94,6 @@ class perform_site_check():
             # If the klass doesnt exist, just use a default
             klass = getattr(content_fetcher, "html_requests")

-
         proxy_args = self.set_proxy_from_list(watch)
         fetcher = klass(proxy_override=proxy_args)

@@ -105,8 +104,7 @@ class perform_site_check():
         elif system_webdriver_delay is not None:
             fetcher.render_extract_delay = system_webdriver_delay

-        fetcher.run(url, timeout, request_headers, request_body, request_method, ignore_status_code, watch['css_filter'])
-        fetcher.quit()
+        fetcher.run(url, timeout, request_headers, request_body, request_method, ignore_status_code)

         # Fetching complete, now filters
         # @todo move to class / maybe inside of fetcher abstract base?
@@ -204,20 +202,6 @@ class perform_site_check():
         else:
             stripped_text_from_html = stripped_text_from_html.encode('utf8')

-        # 615 Extract text by regex
-        extract_text = watch.get('extract_text', [])
-        if len(extract_text) > 0:
-            regex_matched_output = []
-            for s_re in extract_text:
-                result = re.findall(s_re.encode('utf8'), stripped_text_from_html,
-                                    flags=re.MULTILINE | re.DOTALL | re.LOCALE)
-                if result:
-                    regex_matched_output.append(result[0])
-
-            if regex_matched_output:
-                stripped_text_from_html = b'\n'.join(regex_matched_output)
-                text_content_before_ignored_filter = stripped_text_from_html
-
         # Re #133 - if we should strip whitespaces from triggering the change detected comparison
         if self.datastore.data['settings']['application'].get('ignore_whitespace', False):
             fetched_md5 = hashlib.md5(stripped_text_from_html.translate(None, b'\r\n\t ')).hexdigest()
```
@@ -235,11 +219,9 @@ class perform_site_check():
|
|||||||
# Yeah, lets block first until something matches
|
# Yeah, lets block first until something matches
|
||||||
blocked_by_not_found_trigger_text = True
|
blocked_by_not_found_trigger_text = True
|
||||||
# Filter and trigger works the same, so reuse it
|
# Filter and trigger works the same, so reuse it
|
||||||
# It should return the line numbers that match
|
|
||||||
result = html_tools.strip_ignore_text(content=str(stripped_text_from_html),
|
result = html_tools.strip_ignore_text(content=str(stripped_text_from_html),
|
||||||
wordlist=watch['trigger_text'],
|
wordlist=watch['trigger_text'],
|
||||||
mode="line numbers")
|
mode="line numbers")
|
||||||
# If it returned any lines that matched..
|
|
||||||
if result:
|
if result:
|
||||||
blocked_by_not_found_trigger_text = False
|
blocked_by_not_found_trigger_text = False
|
||||||
|
|
||||||
@@ -254,4 +236,4 @@ class perform_site_check():
|
|||||||
if not watch['title'] or not len(watch['title']):
|
if not watch['title'] or not len(watch['title']):
|
||||||
update_obj['title'] = html_tools.extract_element(find='title', html_content=fetcher.content)
|
update_obj['title'] = html_tools.extract_element(find='title', html_content=fetcher.content)
|
||||||
|
|
||||||
return changed_detected, update_obj, text_content_before_ignored_filter, fetcher.screenshot, fetcher.xpath_data
|
return changed_detected, update_obj, text_content_before_ignored_filter, fetcher.screenshot
|
||||||
|
|||||||
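Note on the `extract_text` handling that the `@@ -204,20 +202,6 @@` hunk above removes: each configured entry is applied to the stripped page text as a regular expression, and only the first match of each expression is kept. A minimal standalone sketch of that behaviour (the helper name and sample call are illustrative, not the project's API; patterns are assumed to have no capture groups, and the removed block also passed re.LOCALE):

import re

def apply_extract_text(stripped_text: bytes, extract_text: list) -> bytes:
    # Keep only the first match of each user-supplied regex, joined by newlines.
    matched = []
    for pattern in extract_text:
        result = re.findall(pattern.encode('utf8'), stripped_text,
                            flags=re.MULTILINE | re.DOTALL)
        if result:
            matched.append(result[0])
    # If nothing matched, fall back to the unfiltered text.
    return b'\n'.join(matched) if matched else stripped_text

print(apply_extract_text(b"price: 12.99 EUR\nstock: 3", [r"price: [\d\.]+"]))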
@@ -307,7 +307,7 @@ class ValidateCSSJSONXPATHInput(object):

 class quickWatchForm(Form):
 url = fields.URLField('URL', validators=[validateURL()])
-tag = StringField('Group tag', [validators.Optional()])
+tag = StringField('Group tag', [validators.Optional(), validators.Length(max=35)])

 # Common to a single watch and the global settings
 class commonSettingsForm(Form):
@@ -323,16 +323,13 @@ class commonSettingsForm(Form):
 class watchForm(commonSettingsForm):

 url = fields.URLField('URL', validators=[validateURL()])
-tag = StringField('Group tag', [validators.Optional()], default='')
+tag = StringField('Group tag', [validators.Optional(), validators.Length(max=35)], default='')

 time_between_check = FormField(TimeBetweenCheckForm)

 css_filter = StringField('CSS/JSON/XPATH Filter', [ValidateCSSJSONXPATHInput()], default='')

 subtractive_selectors = StringListField('Remove elements', [ValidateCSSJSONXPATHInput(allow_xpath=False, allow_json=False)])

-extract_text = StringListField('Extract text', [ValidateListRegex()])

 title = StringField('Title', default='')

 ignore_text = StringListField('Ignore text', [ValidateListRegex()])
@@ -363,9 +360,7 @@ class watchForm(commonSettingsForm):
 class globalSettingsRequestForm(Form):
 time_between_check = FormField(TimeBetweenCheckForm)
 proxy = RadioField('Proxy')
-jitter_seconds = IntegerField('Random jitter seconds ± check',
-render_kw={"style": "width: 5em;"},
-validators=[validators.NumberRange(min=0, message="Should contain zero or more seconds")])

 # datastore.data['settings']['application']..
 class globalSettingsApplicationForm(commonSettingsForm):

@@ -39,7 +39,7 @@ def element_removal(selectors: List[str], html_content):
 def xpath_filter(xpath_filter, html_content):
 from lxml import etree, html

-tree = html.fromstring(bytes(html_content, encoding='utf-8'))
+tree = html.fromstring(html_content)
 html_block = ""

 for item in tree.xpath(xpath_filter.strip(), namespaces={'re':'http://exslt.org/regular-expressions'}):

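The only change in the hunk above is how the lxml tree is built. Both forms parse ordinary markup; handing lxml explicit UTF-8 bytes additionally copes with documents that declare their own encoding, which appears to be the motivation for the `bytes(...)` variant. A small illustrative sketch (sample markup and variable names are invented for the example):

from lxml import etree, html

markup = "<html><body><p class='price'>£9.99</p></body></html>"

# Plain-string form seen in the hunk...
tree_a = html.fromstring(markup)
# ...and the explicit UTF-8 bytes form.
tree_b = html.fromstring(bytes(markup, encoding='utf-8'))

for tree in (tree_a, tree_b):
    element = tree.xpath("//p[@class='price']")[0]
    print(etree.tostring(element, method="text", encoding="unicode"))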
@@ -92,7 +92,7 @@ class import_distill_io_json(Importer):

 for d in data.get('data'):
 d_config = json.loads(d['config'])
-extras = {'title': d.get('name', None)}
+extras = {'title': d['name']}

 if len(d['uri']) and good < 5000:
 try:
@@ -114,9 +114,12 @@ class import_distill_io_json(Importer):
 except IndexError:
 pass

+try:
-if d.get('tags', False):
 extras['tag'] = ", ".join(d['tags'])
+except KeyError:
+pass
+except IndexError:
+pass

 new_uuid = datastore.add_watch(url=d['uri'].strip(),
 extras=extras,

@@ -23,7 +23,6 @@ class model(dict):
 'requests': {
 'timeout': 15, # Default 15 seconds
 'time_between_check': {'weeks': None, 'days': None, 'hours': 3, 'minutes': None, 'seconds': None},
-'jitter_seconds': 0,
 'workers': 10, # Number of threads, lower is better for slow connections
 'proxy': None # Preferred proxy connection
 },
@@ -36,7 +35,7 @@ class model(dict):
 'fetch_backend': os.getenv("DEFAULT_FETCH_BACKEND", "html_requests"),
 'global_ignore_text': [], # List of text to ignore when calculating the comparison checksum
 'global_subtractive_selectors': [],
-'ignore_whitespace': True,
+'ignore_whitespace': False,
 'render_anchor_tag_content': False,
 'notification_urls': [], # Apprise URL list
 # Custom notification content

@@ -1,4 +1,5 @@
 import os

 import uuid as uuid_builder

 minimum_seconds_recheck_time = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60))
@@ -11,31 +12,29 @@ from changedetectionio.notification import (


 class model(dict):
-__newest_history_key = None
+base_config = {
-__history_n=0
-__base_config = {
 'url': None,
 'tag': None,
 'last_checked': 0,
 'last_changed': 0,
 'paused': False,
 'last_viewed': 0, # history key value of the last viewed via the [diff] link
-#'newest_history_key': 0,
+'newest_history_key': 0,
 'title': None,
 'previous_md5': False,
-'uuid': str(uuid_builder.uuid4()),
+# UUID not needed, should be generated only as a key
+# 'uuid':
 'headers': {}, # Extra headers to send
 'body': None,
 'method': 'GET',
-#'history': {}, # Dict of timestamp and output stripped filename
+'history': {}, # Dict of timestamp and output stripped filename
 'ignore_text': [], # List of text to ignore when calculating the comparison checksum
 # Custom notification content
 'notification_urls': [], # List of URLs to add to the notification Queue (Usually AppRise)
 'notification_title': default_notification_title,
 'notification_body': default_notification_body,
 'notification_format': default_notification_format,
-'css_filter': '',
+'css_filter': "",
-'extract_text': [], # Extract text by regex after filters
 'subtractive_selectors': [],
 'trigger_text': [], # List of text or regex to wait for until a change is detected
 'fetch_backend': None,

@@ -47,106 +46,12 @@ class model(dict):
 'time_between_check': {'weeks': None, 'days': None, 'hours': None, 'minutes': None, 'seconds': None},
 'webdriver_delay': None
 }
-jitter_seconds = 0
-mtable = {'seconds': 1, 'minutes': 60, 'hours': 3600, 'days': 86400, 'weeks': 86400 * 7}
 def __init__(self, *arg, **kw):
-import uuid
+self.update(self.base_config)
-self.update(self.__base_config)
-self.__datastore_path = kw['datastore_path']
-
-self['uuid'] = str(uuid.uuid4())
-
-del kw['datastore_path']
-
-if kw.get('default'):
-self.update(kw['default'])
-del kw['default']
-
 # goes at the end so we update the default object with the initialiser
 super(model, self).__init__(*arg, **kw)

-@property
-def viewed(self):
-if int(self['last_viewed']) >= int(self.newest_history_key) :
-return True
-
-return False
-
-@property
-def history_n(self):
-return self.__history_n
-
-@property
-def history(self):
-tmp_history = {}
-import logging
-import time
-
-# Read the history file as a dict
-fname = os.path.join(self.__datastore_path, self.get('uuid'), "history.txt")
-if os.path.isfile(fname):
-logging.debug("Disk IO accessed " + str(time.time()))
-with open(fname, "r") as f:
-tmp_history = dict(i.strip().split(',', 2) for i in f.readlines())
-
-if len(tmp_history):
-self.__newest_history_key = list(tmp_history.keys())[-1]
-
-self.__history_n = len(tmp_history)
-
-return tmp_history
-
-@property
-def has_history(self):
-fname = os.path.join(self.__datastore_path, self.get('uuid'), "history.txt")
-return os.path.isfile(fname)
-
-# Returns the newest key, but if theres only 1 record, then it's counted as not being new, so return 0.
-@property
-def newest_history_key(self):
-if self.__newest_history_key is not None:
-return self.__newest_history_key
-
-if len(self.history) <= 1:
-return 0
-
-
-bump = self.history
-return self.__newest_history_key
-
-
-# Save some text file to the appropriate path and bump the history
-# result_obj from fetch_site_status.run()
-def save_history_text(self, contents, timestamp):
-import uuid
-from os import mkdir, path, unlink
-import logging
-
-output_path = "{}/{}".format(self.__datastore_path, self['uuid'])
-
-# Incase the operator deleted it, check and create.
-if not os.path.isdir(output_path):
-mkdir(output_path)
-
-snapshot_fname = "{}/{}.stripped.txt".format(output_path, uuid.uuid4())
-logging.debug("Saving history text {}".format(snapshot_fname))
-
-with open(snapshot_fname, 'wb') as f:
-f.write(contents)
-f.close()
-
-# Append to index
-# @todo check last char was \n
-index_fname = "{}/history.txt".format(output_path)
-with open(index_fname, 'a') as f:
-f.write("{},{}\n".format(timestamp, snapshot_fname))
-f.close()
-
-self.__newest_history_key = timestamp
-self.__history_n+=1
-
-#@todo bump static cache of the last timestamp so we dont need to examine the file to set a proper ''viewed'' status
-return snapshot_fname
-
 @property
 def has_empty_checktime(self):
@@ -157,7 +62,8 @@ class model(dict):

 def threshold_seconds(self):
 seconds = 0
-for m, n in self.mtable.items():
+mtable = {'seconds': 1, 'minutes': 60, 'hours': 3600, 'days': 86400, 'weeks': 86400 * 7}
+for m, n in mtable.items():
 x = self.get('time_between_check', {}).get(m, None)
 if x:
 seconds += x * n

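The large removal above takes the per-watch history handling off the Watch model; the index it managed is a plain `history.txt` file with one `timestamp,snapshot-path` line per stored snapshot. A rough standalone sketch of that on-disk format (helper names are illustrative, not the project's API, and the split assumes no commas inside paths):

import os
import time
import uuid

def append_history(datastore_path, watch_uuid, contents: bytes):
    # Write the snapshot body, then append "timestamp,filename" to the flat index.
    output_path = os.path.join(datastore_path, watch_uuid)
    os.makedirs(output_path, exist_ok=True)
    snapshot_fname = os.path.join(output_path, "{}.stripped.txt".format(uuid.uuid4()))
    with open(snapshot_fname, 'wb') as f:
        f.write(contents)
    with open(os.path.join(output_path, "history.txt"), 'a') as f:
        f.write("{},{}\n".format(int(time.time()), snapshot_fname))
    return snapshot_fname

def read_history(datastore_path, watch_uuid):
    # Parse the index back into {timestamp-as-str: snapshot_path}.
    fname = os.path.join(datastore_path, watch_uuid, "history.txt")
    if not os.path.isfile(fname):
        return {}
    with open(fname) as f:
        return dict(line.strip().split(',', 1) for line in f if line.strip())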
@@ -67,11 +67,6 @@ def process_notification(n_object, datastore):
 url += k + 'avatar_url=https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/changedetectionio/static/images/avatar-256x256.png'

 if url.startswith('tgram://'):
-# Telegram only supports a limit subset of HTML, remove the '<br/>' we place in.
-# re https://github.com/dgtlmoon/changedetection.io/issues/555
-# @todo re-use an existing library we have already imported to strip all non-allowed tags
-n_body = n_body.replace('<br/>', '\n')
-n_body = n_body.replace('</br>', '\n')
 # real limit is 4096, but minus some for extra metadata
 payload_max_size = 3600
 body_limit = max(0, payload_max_size - len(n_title))
@@ -102,12 +97,6 @@ def process_notification(n_object, datastore):
 if log_value and 'WARNING' in log_value or 'ERROR' in log_value:
 raise Exception(log_value)

-# Return what was sent for better logging
-return {'title': n_title,
-'body': n_body,
-'body_format': n_format}


 # Notification title + body content parameters get created here.
 def create_notification_parameters(n_object, datastore):
 from copy import deepcopy

@@ -9,8 +9,6 @@
 # exit when any command fails
 set -e

-export MINIMUM_SECONDS_RECHECK_TIME=0

 find tests/test_*py -type f|while read test_name
 do
 echo "TEST RUNNING $test_name"
@@ -24,26 +22,3 @@ echo "RUNNING WITH BASE_URL SET"
 export BASE_URL="https://really-unique-domain.io"
 pytest tests/test_notification.py


-# Now for the selenium and playwright/browserless fetchers
-# Note - this is not UI functional tests - just checking that each one can fetch the content
-
-echo "TESTING WEBDRIVER FETCH > SELENIUM/WEBDRIVER..."
-docker run -d --name $$-test_selenium -p 4444:4444 --rm --shm-size="2g" selenium/standalone-chrome-debug:3.141.59
-# takes a while to spin up
-sleep 5
-export WEBDRIVER_URL=http://localhost:4444/wd/hub
-pytest tests/fetchers/test_content.py
-unset WEBDRIVER_URL
-docker kill $$-test_selenium
-
-echo "TESTING WEBDRIVER FETCH > PLAYWRIGHT/BROWSERLESS..."
-# Not all platforms support playwright (not ARM/rPI), so it's not packaged in requirements.txt
-pip3 install playwright~=1.22
-docker run -d --name $$-test_browserless -e "DEFAULT_LAUNCH_ARGS=[\"--window-size=1920,1080\"]" --rm -p 3000:3000 --shm-size="2g" browserless/chrome:1.53-chrome-stable
-# takes a while to spin up
-sleep 5
-export PLAYWRIGHT_DRIVER_URL=ws://127.0.0.1:3000
-pytest tests/fetchers/test_content.py
-unset PLAYWRIGHT_DRIVER_URL
-docker kill $$-test_browserless

(Two binary image files are also part of this comparison; only their previous sizes are shown: 6.2 KiB and 12 KiB.)
@@ -1,17 +0,0 @@
-$(document).ready(function () {
-// Load it when the #screenshot tab is in use, so we dont give a slow experience when waiting for the text diff to load
-window.addEventListener('hashchange', function (e) {
-toggle(location.hash);
-}, false);
-
-toggle(location.hash);
-
-function toggle(hash_name) {
-if (hash_name === '#screenshot') {
-$("img#screenshot-img").attr('src', screenshot_url);
-$("#settings").hide();
-} else {
-$("#settings").show();
-}
-}
-});

@@ -1,56 +0,0 @@
-/**
-* debounce
-* @param {integer} milliseconds This param indicates the number of milliseconds
-* to wait after the last call before calling the original function.
-* @param {object} What "this" refers to in the returned function.
-* @return {function} This returns a function that when called will wait the
-* indicated number of milliseconds after the last call before
-* calling the original function.
-*/
-Function.prototype.debounce = function (milliseconds, context) {
-var baseFunction = this,
-timer = null,
-wait = milliseconds;
-
-return function () {
-var self = context || this,
-args = arguments;
-
-function complete() {
-baseFunction.apply(self, args);
-timer = null;
-}
-
-if (timer) {
-clearTimeout(timer);
-}
-
-timer = setTimeout(complete, wait);
-};
-};
-
-/**
-* throttle
-* @param {integer} milliseconds This param indicates the number of milliseconds
-* to wait between calls before calling the original function.
-* @param {object} What "this" refers to in the returned function.
-* @return {function} This returns a function that when called will wait the
-* indicated number of milliseconds between calls before
-* calling the original function.
-*/
-Function.prototype.throttle = function (milliseconds, context) {
-var baseFunction = this,
-lastEventTimestamp = null,
-limit = milliseconds;
-
-return function () {
-var self = context || this,
-args = arguments,
-now = Date.now();
-
-if (!lastEventTimestamp || now - lastEventTimestamp >= limit) {
-lastEventTimestamp = now;
-baseFunction.apply(self, args);
-}
-};
-};

@@ -40,19 +40,13 @@ $(document).ready(function() {
 $.ajax({
 type: "POST",
 url: notification_base_url,
-data : data,
+data : data
-statusCode: {
-400: function() {
-// More than likely the CSRF token was lost when the server restarted
-alert("There was a problem processing the request, please reload the page.");
-}
-}
 }).done(function(data){
 console.log(data);
 alert('Sent');
 }).fail(function(data){
 console.log(data);
-alert('There was an error communicating with the server.');
+alert('Error: '+data.responseJSON.error);
 })
 });
 });

@@ -1,230 +0,0 @@
-// Horrible proof of concept code :)
-// yes - this is really a hack, if you are a front-ender and want to help, please get in touch!
-
-$(document).ready(function() {
-
-var current_selected_i;
-var state_clicked=false;
-
-var c;
-
-// greyed out fill context
-var xctx;
-// redline highlight context
-var ctx;
-
-var current_default_xpath;
-var x_scale=1;
-var y_scale=1;
-var selector_image;
-var selector_image_rect;
-var selector_data;
-
-$('#visualselector-tab').click(function () {
-$("img#selector-background").off('load');
-state_clicked = false;
-current_selected_i = false;
-bootstrap_visualselector();
-});
-
-$(document).on('keydown', function(event) {
-if ($("img#selector-background").is(":visible")) {
-if (event.key == "Escape") {
-state_clicked=false;
-ctx.clearRect(0, 0, c.width, c.height);
-}
-}
-});
-
-// For when the page loads
-if(!window.location.hash || window.location.hash != '#visualselector') {
-$("img#selector-background").attr('src','');
-return;
-}
-
-// Handle clearing button/link
-$('#clear-selector').on('click', function(event) {
-if(!state_clicked) {
-alert('Oops, Nothing selected!');
-}
-state_clicked=false;
-ctx.clearRect(0, 0, c.width, c.height);
-xctx.clearRect(0, 0, c.width, c.height);
-$("#css_filter").val('');
-});
-
-
-bootstrap_visualselector();
-
-
-
-function bootstrap_visualselector() {
-if ( 1 ) {
-// bootstrap it, this will trigger everything else
-$("img#selector-background").bind('load', function () {
-console.log("Loaded background...");
-c = document.getElementById("selector-canvas");
-// greyed out fill context
-xctx = c.getContext("2d");
-// redline highlight context
-ctx = c.getContext("2d");
-current_default_xpath =$("#css_filter").val();
-fetch_data();
-$('#selector-canvas').off("mousemove mousedown");
-// screenshot_url defined in the edit.html template
-}).attr("src", screenshot_url);
-}
-}
-
-function fetch_data() {
-// Image is ready
-$('.fetching-update-notice').html("Fetching element data..");
-
-$.ajax({
-url: watch_visual_selector_data_url,
-context: document.body
-}).done(function (data) {
-$('.fetching-update-notice').html("Rendering..");
-selector_data = data;
-console.log("Reported browser width from backend: "+data['browser_width']);
-state_clicked=false;
-set_scale();
-reflow_selector();
-$('.fetching-update-notice').fadeOut();
-});
-};
-
-
-
-function set_scale() {
-
-// some things to check if the scaling doesnt work
-// - that the widths/sizes really are about the actual screen size cat elements.json |grep -o width......|sort|uniq
-selector_image = $("img#selector-background")[0];
-selector_image_rect = selector_image.getBoundingClientRect();
-
-// make the canvas the same size as the image
-$('#selector-canvas').attr('height', selector_image_rect.height);
-$('#selector-canvas').attr('width', selector_image_rect.width);
-$('#selector-wrapper').attr('width', selector_image_rect.width);
-x_scale = selector_image_rect.width / selector_data['browser_width'];
-y_scale = selector_image_rect.height / selector_image.naturalHeight;
-ctx.strokeStyle = 'rgba(255,0,0, 0.9)';
-ctx.fillStyle = 'rgba(255,0,0, 0.1)';
-ctx.lineWidth = 3;
-console.log("scaling set x: "+x_scale+" by y:"+y_scale);
-$("#selector-current-xpath").css('max-width', selector_image_rect.width);
-}
-
-function reflow_selector() {
-$(window).resize(function() {
-set_scale();
-highlight_current_selected_i();
-});
-var selector_currnt_xpath_text=$("#selector-current-xpath span");
-
-set_scale();
-
-console.log(selector_data['size_pos'].length + " selectors found");
-
-// highlight the default one if we can find it in the xPath list
-// or the xpath matches the default one
-found = false;
-if(current_default_xpath.length) {
-for (var i = selector_data['size_pos'].length; i!==0; i--) {
-var sel = selector_data['size_pos'][i-1];
-if(selector_data['size_pos'][i - 1].xpath == current_default_xpath) {
-console.log("highlighting "+current_default_xpath);
-current_selected_i = i-1;
-highlight_current_selected_i();
-found = true;
-break;
-}
-}
-if(!found) {
-alert("Unfortunately your existing CSS/xPath Filter was no longer found!");
-}
-}
-
-
-$('#selector-canvas').bind('mousemove', function (e) {
-if(state_clicked) {
-return;
-}
-ctx.clearRect(0, 0, c.width, c.height);
-current_selected_i=null;
-
-// Add in offset
-if ((typeof e.offsetX === "undefined" || typeof e.offsetY === "undefined") || (e.offsetX === 0 && e.offsetY === 0)) {
-var targetOffset = $(e.target).offset();
-e.offsetX = e.pageX - targetOffset.left;
-e.offsetY = e.pageY - targetOffset.top;
-}
-
-// Reverse order - the most specific one should be deeper/"laster"
-// Basically, find the most 'deepest'
-var found=0;
-ctx.fillStyle = 'rgba(205,0,0,0.35)';
-for (var i = selector_data['size_pos'].length; i!==0; i--) {
-// draw all of them? let them choose somehow?
-var sel = selector_data['size_pos'][i-1];
-// If we are in a bounding-box
-if (e.offsetY > sel.top * y_scale && e.offsetY < sel.top * y_scale + sel.height * y_scale
-&&
-e.offsetX > sel.left * y_scale && e.offsetX < sel.left * y_scale + sel.width * y_scale
-
-) {
-
-// FOUND ONE
-set_current_selected_text(sel.xpath);
-ctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
-ctx.fillRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
-
-// no need to keep digging
-// @todo or, O to go out/up, I to go in
-// or double click to go up/out the selector?
-current_selected_i=i-1;
-found+=1;
-break;
-}
-}
-
-}.debounce(5));
-
-function set_current_selected_text(s) {
-selector_currnt_xpath_text[0].innerHTML=s;
-}
-
-function highlight_current_selected_i() {
-if(state_clicked) {
-state_clicked=false;
-xctx.clearRect(0,0,c.width, c.height);
-return;
-}
-
-var sel = selector_data['size_pos'][current_selected_i];
-if (sel[0] == '/') {
-// @todo - not sure just checking / is right
-$("#css_filter").val('xpath:'+sel.xpath);
-} else {
-$("#css_filter").val(sel.xpath);
-}
-xctx.fillStyle = 'rgba(205,205,205,0.95)';
-xctx.strokeStyle = 'rgba(225,0,0,0.9)';
-xctx.lineWidth = 3;
-xctx.fillRect(0,0,c.width, c.height);
-// Clear out what only should be seen (make a clear/clean spot)
-xctx.clearRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
-xctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
-state_clicked=true;
-set_current_selected_text(sel.xpath);
-
-}
-
-
-$('#selector-canvas').bind('mousedown', function (e) {
-highlight_current_selected_i();
-});
-}
-
-});

@@ -4,7 +4,6 @@ $(function () {
 $(this).closest('.unviewed').removeClass('unviewed');
 });


 $('.with-share-link > *').click(function () {
 $("#copied-clipboard").remove();

@@ -21,6 +20,5 @@ $(function () {
 $(this).remove();
 });
 });

 });

@@ -338,8 +338,7 @@ footer {
 padding-top: 110px; }
 div.tabs.collapsable ul li {
 display: block;
-border-radius: 0px;
+border-radius: 0px; }
-margin-right: 0px; }
 input[type='text'] {
 width: 100%; }
 /*
@@ -353,8 +352,6 @@ and also iPads specifically.
 /* Hide table headers (but not display: none;, for accessibility) */ }
 .watch-table thead, .watch-table tbody, .watch-table th, .watch-table td, .watch-table tr {
 display: block; }
-.watch-table .last-checked > span {
-vertical-align: middle; }
 .watch-table .last-checked::before {
 color: #555;
 content: "Last Checked "; }
@@ -372,8 +369,7 @@ and also iPads specifically.
 .watch-table td {
 /* Behave like a "row" */
 border: none;
-border-bottom: 1px solid #eee;
+border-bottom: 1px solid #eee; }
-vertical-align: middle; }
 .watch-table td:before {
 /* Top/left values mimic padding */
 top: 6px;
@@ -433,15 +429,6 @@ and also iPads specifically.
 .tab-pane-inner:target {
 display: block; }

-#beta-logo {
-height: 50px;
-right: -3px;
-top: -3px;
-position: absolute; }
-
-#selector-header {
-padding-bottom: 1em; }
-
 .edit-form {
 min-width: 70%;
 /* so it cant overflow */
@@ -467,24 +454,6 @@ ul {
 .time-check-widget tr input[type="number"] {
 width: 5em; }

-#selector-wrapper {
-height: 600px;
-overflow-y: scroll;
-position: relative; }
-#selector-wrapper > img {
-position: absolute;
-z-index: 4;
-max-width: 100%; }
-#selector-wrapper > canvas {
-position: relative;
-z-index: 5;
-max-width: 100%; }
-#selector-wrapper > canvas:hover {
-cursor: pointer; }
-
-#selector-current-xpath {
-font-size: 80%; }
-
 #webdriver-override-options input[type="number"] {
 width: 5em; }

@@ -493,42 +462,3 @@ ul {

 #api-key-copy {
 color: #0078e7; }

-/* spinner */
-.loader,
-.loader:after {
-border-radius: 50%;
-width: 10px;
-height: 10px; }
-
-.loader {
-margin: 0px auto;
-font-size: 3px;
-vertical-align: middle;
-display: inline-block;
-text-indent: -9999em;
-border-top: 1.1em solid rgba(38, 104, 237, 0.2);
-border-right: 1.1em solid rgba(38, 104, 237, 0.2);
-border-bottom: 1.1em solid rgba(38, 104, 237, 0.2);
-border-left: 1.1em solid #2668ed;
--webkit-transform: translateZ(0);
--ms-transform: translateZ(0);
-transform: translateZ(0);
--webkit-animation: load8 1.1s infinite linear;
-animation: load8 1.1s infinite linear; }
-
-@-webkit-keyframes load8 {
-0% {
--webkit-transform: rotate(0deg);
-transform: rotate(0deg); }
-100% {
--webkit-transform: rotate(360deg);
-transform: rotate(360deg); } }
-
-@keyframes load8 {
-0% {
--webkit-transform: rotate(0deg);
-transform: rotate(0deg); }
-100% {
--webkit-transform: rotate(360deg);
-transform: rotate(360deg); } }

@@ -469,7 +469,6 @@ footer {
 div.tabs.collapsable ul li {
 display: block;
 border-radius: 0px;
-margin-right: 0px;
 }

 input[type='text'] {
@@ -487,11 +486,6 @@ and also iPads specifically.
 display: block;
 }

-.last-checked {
-> span {
-vertical-align: middle;
-}
-}
 .last-checked::before {
 color: #555;
 content: "Last Checked ";
@@ -522,7 +516,7 @@ and also iPads specifically.
 /* Behave like a "row" */
 border: none;
 border-bottom: 1px solid #eee;
-vertical-align: middle;
 &:before {
 /* Top/left values mimic padding */
 top: 6px;
@@ -619,18 +613,6 @@ $form-edge-padding: 20px;
 padding: 0px;
 }

-#beta-logo {
-height: 50px;
-// looks better when it's hanging off a little
-right: -3px;
-top: -3px;
-position: absolute;
-}
-
-#selector-header {
-padding-bottom: 1em;
-}
-
 .edit-form {
 min-width: 70%;
 /* so it cant overflow */
@@ -667,30 +649,6 @@ ul {
 }
 }

-#selector-wrapper {
-height: 600px;
-overflow-y: scroll;
-position: relative;
-//width: 100%;
-> img {
-position: absolute;
-z-index: 4;
-max-width: 100%;
-}
->canvas {
-position: relative;
-z-index: 5;
-max-width: 100%;
-&:hover {
-cursor: pointer;
-}
-}
-}
-
-#selector-current-xpath {
-font-size: 80%;
-}
-
 #webdriver-override-options {
 input[type="number"] {
 width: 5em;
@@ -706,48 +664,3 @@ ul {
 #api-key-copy {
 color: #0078e7;
 }

-/* spinner */
-.loader,
-.loader:after {
-border-radius: 50%;
-width: 10px;
-height: 10px;
-}
-.loader {
-margin: 0px auto;
-font-size: 3px;
-vertical-align: middle;
-display: inline-block;
-text-indent: -9999em;
-border-top: 1.1em solid rgba(38,104,237, 0.2);
-border-right: 1.1em solid rgba(38,104,237, 0.2);
-border-bottom: 1.1em solid rgba(38,104,237, 0.2);
-border-left: 1.1em solid #2668ed;
--webkit-transform: translateZ(0);
--ms-transform: translateZ(0);
-transform: translateZ(0);
--webkit-animation: load8 1.1s infinite linear;
-animation: load8 1.1s infinite linear;
-}
-@-webkit-keyframes load8 {
-0% {
--webkit-transform: rotate(0deg);
-transform: rotate(0deg);
-}
-100% {
--webkit-transform: rotate(360deg);
-transform: rotate(360deg);
-}
-}
-@keyframes load8 {
-0% {
--webkit-transform: rotate(0deg);
-transform: rotate(0deg);
-}
-100% {
--webkit-transform: rotate(360deg);
-transform: rotate(360deg);
-}
-}

@@ -40,7 +40,7 @@ class ChangeDetectionStore:

 # Base definition for all watchers
 # deepcopy part of #569 - not sure why its needed exactly
-self.generic_definition = deepcopy(Watch.model(datastore_path = datastore_path, default={}))
+self.generic_definition = deepcopy(Watch.model())

 if path.isfile('changedetectionio/source.txt'):
 with open('changedetectionio/source.txt') as f:
@@ -71,10 +71,13 @@ class ChangeDetectionStore:
 if 'application' in from_disk['settings']:
 self.__data['settings']['application'].update(from_disk['settings']['application'])

-# Convert each existing watch back to the Watch.model object
+# Reinitialise each `watching` with our generic_definition in the case that we add a new var in the future.
+# @todo pretty sure theres a python we todo this with an abstracted(?) object!
 for uuid, watch in self.__data['watching'].items():
-watch['uuid']=uuid
+_blank = deepcopy(self.generic_definition)
-self.__data['watching'][uuid] = Watch.model(datastore_path=self.datastore_path, default=watch)
+_blank.update(watch)
+self.__data['watching'].update({uuid: _blank})
+self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
 print("Watching:", uuid, self.__data['watching'][uuid]['url'])

 # First time ran, doesnt exist.
@@ -84,7 +87,8 @@ class ChangeDetectionStore:

 self.add_watch(url='http://www.quotationspage.com/random.php', tag='test')
 self.add_watch(url='https://news.ycombinator.com/', tag='Tech news')
-self.add_watch(url='https://changedetection.io/CHANGELOG.txt', tag='changedetection.io')
+self.add_watch(url='https://www.gov.uk/coronavirus', tag='Covid')
+self.add_watch(url='https://changedetection.io/CHANGELOG.txt')

 self.__data['version_tag'] = version_tag

@@ -127,8 +131,23 @@ class ChangeDetectionStore:
 # Finally start the thread that will manage periodic data saves to JSON
 save_data_thread = threading.Thread(target=self.save_datastore).start()

+# Returns the newest key, but if theres only 1 record, then it's counted as not being new, so return 0.
+def get_newest_history_key(self, uuid):
+if len(self.__data['watching'][uuid]['history']) == 1:
+return 0
+
+dates = list(self.__data['watching'][uuid]['history'].keys())
+# Convert to int, sort and back to str again
+# @todo replace datastore getter that does this automatically
+dates = [int(i) for i in dates]
+dates.sort(reverse=True)
+if len(dates):
+# always keyed as str
+return str(dates[0])
+
+return 0

 def set_last_viewed(self, uuid, timestamp):
-logging.debug("Setting watch UUID: {} last viewed to {}".format(uuid, int(timestamp)))
 self.data['watching'][uuid].update({'last_viewed': int(timestamp)})
 self.needs_write = True

@@ -152,6 +171,7 @@ class ChangeDetectionStore:
 del (update_obj[dict_key])

 self.__data['watching'][uuid].update(update_obj)
+self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)

 self.needs_write = True

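A quick standalone illustration of the `get_newest_history_key()` logic added in the `@@ -127,8 +131,23 @@` hunk above: history entries are keyed by string timestamps, so they are converted to int for sorting, and a single record counts as nothing new. The function and sample data below are illustrative only, not the project's API:

def newest_history_key(history: dict):
    # A single (or no) snapshot means there is nothing "new" to compare against.
    if len(history) <= 1:
        return 0
    dates = sorted((int(k) for k in history), reverse=True)
    # The datastore keeps keys as strings, so hand the newest one back as str.
    return str(dates[0])

print(newest_history_key({"1650000000": "a.txt", "1650000600": "b.txt"}))  # "1650000600"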
@@ -159,26 +179,27 @@ class ChangeDetectionStore:
 def threshold_seconds(self):
 seconds = 0
 mtable = {'seconds': 1, 'minutes': 60, 'hours': 3600, 'days': 86400, 'weeks': 86400 * 7}
+minimum_seconds_recheck_time = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60))
 for m, n in mtable.items():
 x = self.__data['settings']['requests']['time_between_check'].get(m)
 if x:
 seconds += x * n
-return seconds
+return max(seconds, minimum_seconds_recheck_time)

-@property
-def has_unviewed(self):
-for uuid, watch in self.__data['watching'].items():
-if watch.viewed == False:
-return True
-return False

 @property
 def data(self):
 has_unviewed = False
-for uuid, watch in self.__data['watching'].items():
+for uuid, v in self.__data['watching'].items():
+self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
+if int(v['newest_history_key']) <= int(v['last_viewed']):
+self.__data['watching'][uuid]['viewed'] = True
+
+else:
+self.__data['watching'][uuid]['viewed'] = False
+has_unviewed = True

 # #106 - Be sure this is None on empty string, False, None, etc
 # Default var for fetch_backend
-# @todo this may not be needed anymore, or could be easily removed
 if not self.__data['watching'][uuid]['fetch_backend']:
 self.__data['watching'][uuid]['fetch_backend'] = self.__data['settings']['application']['fetch_backend']

@@ -187,6 +208,8 @@ class ChangeDetectionStore:
 if not self.__data['settings']['application']['base_url']:
 self.__data['settings']['application']['base_url'] = env_base_url.strip('" ')

+self.__data['has_unviewed'] = has_unviewed

 return self.__data

 def get_all_tags(self):
@@ -217,11 +240,11 @@ class ChangeDetectionStore:

 # GitHub #30 also delete history records
 for uuid in self.data['watching']:
-for path in self.data['watching'][uuid].history.values():
+for path in self.data['watching'][uuid]['history'].values():
 self.unlink_history_file(path)

 else:
-for path in self.data['watching'][uuid].history.values():
+for path in self.data['watching'][uuid]['history'].values():
 self.unlink_history_file(path)

 del self.data['watching'][uuid]
@@ -253,25 +276,13 @@ class ChangeDetectionStore:
 def scrub_watch(self, uuid):
 import pathlib

-self.__data['watching'][uuid].update(
+self.__data['watching'][uuid].update({'history': {}, 'last_checked': 0, 'last_changed': 0, 'newest_history_key': 0, 'previous_md5': False})
-{'last_checked': 0,
-'last_changed': 0,
-'last_viewed': 0,
-'previous_md5': False,
-'last_notification_error': False,
-'last_error': False})
-
-# JSON Data, Screenshots, Textfiles (history index and snapshots), HTML in the future etc
-for item in pathlib.Path(os.path.join(self.datastore_path, uuid)).rglob("*.*"):
-unlink(item)
-
-# Force the attr to recalculate
-bump = self.__data['watching'][uuid].history
-
 self.needs_write_urgent = True

-def add_watch(self, url, tag="", extras=None, write_to_disk_now=True):
+for item in pathlib.Path(self.datastore_path).rglob(uuid+"/*.txt"):
+unlink(item)
+
+def add_watch(self, url, tag="", extras=None, write_to_disk_now=True):
 if extras is None:
 extras = {}
 # should always be str

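For the `threshold_seconds()` change in the first hunk above, the variant that clamps to a minimum recheck time works roughly like this (a standalone sketch; the dict literals are example inputs, not project defaults):

import os

def threshold_seconds(time_between_check: dict) -> int:
    # Convert the per-unit settings into seconds, then never drop below the
    # MINIMUM_SECONDS_RECHECK_TIME floor (60s unless overridden via env var).
    mtable = {'seconds': 1, 'minutes': 60, 'hours': 3600, 'days': 86400, 'weeks': 86400 * 7}
    minimum = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60))
    seconds = sum((time_between_check.get(unit) or 0) * scale for unit, scale in mtable.items())
    return max(seconds, minimum)

print(threshold_seconds({'hours': 3}))    # 10800
print(threshold_seconds({'seconds': 5}))  # clamped up to 60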
@@ -297,7 +308,7 @@ class ChangeDetectionStore:
|
|||||||
'body', 'method',
|
'body', 'method',
|
||||||
'ignore_text', 'css_filter',
|
'ignore_text', 'css_filter',
|
||||||
'subtractive_selectors', 'trigger_text',
|
'subtractive_selectors', 'trigger_text',
|
||||||
'extract_title_as_title', 'extract_text']:
|
'extract_title_as_title']:
|
||||||
if res.get(k):
|
if res.get(k):
|
||||||
apply_extras[k] = res[k]
|
apply_extras[k] = res[k]
|
||||||
|
|
||||||
@@ -307,15 +318,16 @@ class ChangeDetectionStore:
|
|||||||
return False
|
return False
|
||||||
|
|
||||||
with self.lock:
|
with self.lock:
|
||||||
|
# @todo use a common generic version of this
|
||||||
|
new_uuid = str(uuid_builder.uuid4())
|
||||||
# #Re 569
|
# #Re 569
|
||||||
new_watch = Watch.model(datastore_path=self.datastore_path, default={
|
# Not sure why deepcopy was needed here, sometimes new watches would appear to already have 'history' set
|
||||||
|
# I assumed this would instantiate a new object but somehow an existing dict was getting used
|
||||||
|
new_watch = deepcopy(Watch.model({
|
||||||
'url': url,
|
'url': url,
|
||||||
'tag': tag
|
'tag': tag
|
||||||
})
|
}))
|
||||||
|
|
||||||
new_uuid = new_watch['uuid']
|
|
||||||
-        logging.debug("Added URL {} - {}".format(url, new_uuid))

        for k in ['uuid', 'history', 'last_checked', 'last_changed', 'newest_history_key', 'previous_md5', 'viewed']:
            if k in apply_extras:

@@ -335,6 +347,23 @@ class ChangeDetectionStore:
            self.sync_to_json()
        return new_uuid

+    # Save some text file to the appropriate path and bump the history
+    # result_obj from fetch_site_status.run()
+    def save_history_text(self, watch_uuid, contents):
+        import uuid
+
+        output_path = "{}/{}".format(self.datastore_path, watch_uuid)
+        # Incase the operator deleted it, check and create.
+        if not os.path.isdir(output_path):
+            mkdir(output_path)
+
+        fname = "{}/{}.stripped.txt".format(output_path, uuid.uuid4())
+        with open(fname, 'wb') as f:
+            f.write(contents)
+            f.close()
+
+        return fname
+
    def get_screenshot(self, watch_uuid):
        output_path = "{}/{}".format(self.datastore_path, watch_uuid)
        fname = "{}/last-screenshot.png".format(output_path)

@@ -343,15 +372,6 @@ class ChangeDetectionStore:

        return False

-    def visualselector_data_is_ready(self, watch_uuid):
-        output_path = "{}/{}".format(self.datastore_path, watch_uuid)
-        screenshot_filename = "{}/last-screenshot.png".format(output_path)
-        elements_index_filename = "{}/elements.json".format(output_path)
-        if path.isfile(screenshot_filename) and path.isfile(elements_index_filename) :
-            return True
-
-        return False

    # Save as PNG, PNG is larger but better for doing visual diff in the future
    def save_screenshot(self, watch_uuid, screenshot: bytes):
        output_path = "{}/{}".format(self.datastore_path, watch_uuid)

@@ -360,14 +380,6 @@ class ChangeDetectionStore:
            f.write(screenshot)
            f.close()

-    def save_xpath_data(self, watch_uuid, data):
-        output_path = "{}/{}".format(self.datastore_path, watch_uuid)
-        fname = "{}/elements.json".format(output_path)
-        with open(fname, 'w') as f:
-            f.write(json.dumps(data))
-            f.close()
-

    def sync_to_json(self):
        logging.info("Saving JSON..")
        print("Saving JSON..")

@@ -420,8 +432,8 @@ class ChangeDetectionStore:

        index=[]
        for uuid in self.data['watching']:
-            for id in self.data['watching'][uuid].history:
-                index.append(self.data['watching'][uuid].history[str(id)])
+            for id in self.data['watching'][uuid]['history']:
+                index.append(self.data['watching'][uuid]['history'][str(id)])

        import pathlib

@@ -492,28 +504,3 @@ class ChangeDetectionStore:
            # Only upgrade individual watch time if it was set
            if watch.get('minutes_between_check', False):
                self.data['watching'][uuid]['time_between_check']['minutes'] = watch['minutes_between_check']
-
-    # Move the history list to a flat text file index
-    # Better than SQLite because this list is only appended to, and works across NAS / NFS type setups
-    def update_2(self):
-        # @todo test running this on a newly updated one (when this already ran)
-        for uuid, watch in self.data['watching'].items():
-            history = []
-
-            if watch.get('history', False):
-                for d, p in watch['history'].items():
-                    d = int(d)  # Used to be keyed as str, we'll fix this now too
-                    history.append("{},{}\n".format(d,p))
-
-            if len(history):
-                target_path = os.path.join(self.datastore_path, uuid)
-                if os.path.exists(target_path):
-                    with open(os.path.join(target_path, "history.txt"), "w") as f:
-                        f.writelines(history)
-                else:
-                    logging.warning("Datastore history directory {} does not exist, skipping history import.".format(target_path))
-
-            # No longer needed, dynamically pulled from the disk when needed.
-            # But we should set it back to a empty dict so we don't break if this schema runs on an earlier version.
-            # In the distant future we can remove this entirely
-            self.data['watching'][uuid]['history'] = {}

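(Aside, not part of the diff: a minimal sketch of the flat history index that the removed update_2() migration writes and that the consistency test further down reads back. Each watch directory holds <uuid4>.stripped.txt snapshots plus a history.txt file of "timestamp,filepath" lines; the function name and datastore path below are illustrative only.)

    import os

    def read_history_index(datastore_path, watch_uuid):
        # Returns {timestamp_str: snapshot_path}, mirroring the "{},{}\n".format(d, p)
        # lines written by update_2() above.
        index_file = os.path.join(datastore_path, watch_uuid, "history.txt")
        history = {}
        if os.path.isfile(index_file):
            with open(index_file, "r") as f:
                for line in f:
                    if line.strip():
                        # Split on the first comma only, in case the path itself contains one
                        timestamp, path = line.strip().split(",", 1)
                        history[timestamp] = path
        return history
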
@@ -14,7 +14,7 @@
    <li>Use <a target=_new href="https://github.com/caronc/apprise">AppRise URLs</a> for notification to just about any service! <i><a target=_new href="https://github.com/dgtlmoon/changedetection.io/wiki/Notification-configuration-notes">Please read the notification services wiki here for important configuration notes</a></i>.</li>
    <li><code>discord://</code> only supports a maximum <strong>2,000 characters</strong> of notification text, including the title.</li>
    <li><code>tgram://</code> bots cant send messages to other bots, so you should specify chat ID of non-bot user.</li>
-   <li><code>tgram://</code> only supports very limited HTML and can fail when extra tags are sent, <a href="https://core.telegram.org/bots/api#html-style">read more here</a> (or use plaintext/markdown format)</li>
+   <li>Go here for <a href="{{url_for('notification_logs')}}">notification debug logs</a></li>
    </ul>
    </div>
    <br/>
@@ -22,7 +22,6 @@
    {% if emailprefix %}
    <a id="add-email-helper" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Add email</a>
    {% endif %}
-   <a href="{{url_for('notification_logs')}}" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Notification debug logs</a>
    </div>
    <div id="notification-customisation" class="pure-control-group">
    <div class="pure-control-group">

@@ -1,11 +1,6 @@
{% extends 'base.html' %}

{% block content %}
-<script>
-    const screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid)}}";
-</script>
-<script type="text/javascript" src="{{url_for('static_content', group='js', filename='diff-overview.js')}}" defer></script>
-
<div id="settings">
<h1>Differences</h1>
<form class="pure-form " action="" method="GET">
@@ -44,7 +39,9 @@
<div class="tabs">
<ul>
    <li class="tab" id="default-tab"><a href="#text">Text</a></li>
-    <li class="tab" id="screenshot-tab"><a href="#screenshot">Screenshot</a></li>
+    {% if screenshot %}
+    <li class="tab"><a href="#screenshot">Current screenshot</a></li>
+    {% endif %}
</ul>
</div>

@@ -66,21 +63,17 @@
</table>
Diff algorithm from the amazing <a href="https://github.com/kpdecker/jsdiff">github.com/kpdecker/jsdiff</a>
</div>
-<div class="tab-pane-inner" id="screenshot">
-    <div class="tip">
-        For now, Differences are performed on text, not graphically, only the latest screenshot is available.
-    </div>
-    </br>
-    {% if is_html_webdriver %}
{% if screenshot %}
-    <img style="max-width: 80%" id="screenshot-img" alt="Current screenshot from most recent request"/>
-    {% else %}
-    No screenshot available just yet! Try rechecking the page.
-    {% endif %}
-    {% else %}
-    <strong>Screenshot requires Playwright/WebDriver enabled</strong>
-    {% endif %}
+<div class="tab-pane-inner" id="screenshot">
+    <p>
+    <i>For now, only the most recent screenshot is saved and displayed.</i>
+    </p>
+    <img src="{{url_for('static_content', group='screenshot', filename=uuid)}}">
</div>
+{% endif %}

</div>

@@ -5,18 +5,12 @@
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<script>
    const notification_base_url="{{url_for('ajax_callback_send_notification_test')}}";
-    const watch_visual_selector_data_url="{{url_for('static_content', group='visual_selector_data', filename=uuid)}}";
-    const screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid)}}";
-
    {% if emailprefix %}
    const email_notification_prefix=JSON.parse('{{ emailprefix|tojson }}');
    {% endif %}

</script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-settings.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script>
-<script type="text/javascript" src="{{url_for('static_content', group='js', filename='visual-selector.js')}}" defer></script>
-<script type="text/javascript" src="{{url_for('static_content', group='js', filename='limit.js')}}" defer></script>
-
<div class="edit-form monospaced-textarea">

@@ -24,7 +18,6 @@
<ul>
    <li class="tab" id="default-tab"><a href="#general">General</a></li>
    <li class="tab"><a href="#request">Request</a></li>
-    <li class="tab"><a id="visualselector-tab" href="#visualselector">Visual Selector</a></li>
    <li class="tab"><a href="#filters-and-triggers">Filters & Triggers</a></li>
    <li class="tab"><a href="#notifications">Notifications</a></li>
</ul>
@@ -199,57 +192,6 @@ nav
        </span>
    </div>
</fieldset>
-<fieldset>
-    <div class="pure-control-group">
-        {{ render_field(form.extract_text, rows=5, placeholder="\d+ online") }}
-        <span class="pure-form-message-inline">
-            <ul>
-                <li>Extracts text in the final output after other filters using regular expressions, for example <code>\d+ online</code></li>
-                <li>One line per regular-expression.</li>
-            </ul>
-        </span>
-    </div>
-</fieldset>
-</div>
-
-<div class="tab-pane-inner visual-selector-ui" id="visualselector">
-    <img id="beta-logo" src="{{url_for('static_content', group='images', filename='beta-logo.png')}}">
-
-    <fieldset>
-        <div class="pure-control-group">
-            {% if visualselector_enabled %}
-                {% if visualselector_data_is_ready %}
-                    <div id="selector-header">
-                        <a id="clear-selector" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Clear selection</a>
-                        <i class="fetching-update-notice" style="font-size: 80%;">One moment, fetching screenshot and element information..</i>
-                    </div>
-                    <div id="selector-wrapper">
-                        <!-- request the screenshot and get the element offset info ready -->
-                        <!-- use img src ready load to know everything is ready to map out -->
-                        <!-- @todo: maybe something interesting like a field to select 'elements that contain text... and their parents n' -->
-                        <img id="selector-background" />
-                        <canvas id="selector-canvas"></canvas>
-
-                    </div>
-                    <div id="selector-current-xpath" style="overflow-x: hidden"><strong>Currently:</strong> <span class="text">Loading...</span></div>
-
-                    <span class="pure-form-message-inline">
-                        <p><span style="font-weight: bold">Beta!</span> The Visual Selector is new and there may be minor bugs, please report pages that dont work, help us to improve this software!</p>
-                    </span>
-
-                {% else %}
-                    <span class="pure-form-message-inline">Screenshot and element data is not available or not yet ready.</span>
-                {% endif %}
-            {% else %}
-                <span class="pure-form-message-inline">
-                    <p>Sorry, this functionality only works with Playwright/Chrome enabled watches.</p>
-                    <p>Enable the Playwright Chrome fetcher, or alternatively try our <a href="https://lemonade.changedetection.io/start">very affordable subscription based service</a>.</p>
-                    <p>This is because Selenium/WebDriver can not extract full page screenshots reliably.</p>
-
-                </span>
-            {% endif %}
-        </div>
-    </fieldset>
</div>

<div id="actions">
@@ -259,8 +201,6 @@ nav

    <a href="{{url_for('form_delete', uuid=uuid)}}"
        class="pure-button button-small button-error ">Delete</a>
-    <a href="{{url_for('scrub_watch', uuid=uuid)}}"
-        class="pure-button button-small button-error ">Scrub</a>
    <a href="{{url_for('form_clone', uuid=uuid)}}"
        class="pure-button button-small ">Create Copy</a>
</div>

@@ -4,7 +4,7 @@
<div class="edit-form">
    <div class="inner">

-        <h4 style="margin-top: 0px;">Notification debug log</h4>
+        <h4 style="margin-top: 0px;">The following issues were detected when sending notifications</h4>
        <div id="notification-error-log">
            <ul style="font-size: 80%; margin:0px; padding: 0 0 0 7px">
                {% for log in logs|reverse %}

@@ -1,10 +1,6 @@
{% extends 'base.html' %}

{% block content %}
-<script>
-    const screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid)}}";
-</script>
-<script type="text/javascript" src="{{url_for('static_content', group='js', filename='diff-overview.js')}}" defer></script>

<div id="settings">
<h1>Current - {{watch.last_checked|format_timestamp_timeago}}</h1>
@@ -14,7 +10,9 @@
<div class="tabs">
<ul>
    <li class="tab" id="default-tab"><a href="#text">Text</a></li>
-    <li class="tab" id="screenshot-tab"><a href="#screenshot">Screenshot</a></li>
+    {% if screenshot %}
+    <li class="tab"><a href="#screenshot">Current screenshot</a></li>
+    {% endif %}
</ul>
</div>

@@ -33,20 +31,15 @@
</tbody>
</table>
</div>
-<div class="tab-pane-inner" id="screenshot">
-    <div class="tip">
-        For now, Differences are performed on text, not graphically, only the latest screenshot is available.
-    </div>
-    </br>
-    {% if is_html_webdriver %}
{% if screenshot %}
-    <img style="max-width: 80%" id="screenshot-img" alt="Current screenshot from most recent request"/>
-    {% else %}
-    No screenshot available just yet! Try rechecking the page.
-    {% endif %}
-    {% else %}
-    <strong>Screenshot requires Playwright/WebDriver enabled</strong>
-    {% endif %}
+<div class="tab-pane-inner" id="screenshot">
+    <p>
+    <i>For now, only the most recent screenshot is saved and displayed.</i>
+    </p>
+    <img src="{{url_for('static_content', group='screenshot', filename=uuid)}}">
</div>
+{% endif %}
</div>
{% endblock %}

@@ -32,11 +32,6 @@
    {{ render_field(form.requests.form.time_between_check, class="time-check-widget") }}
    <span class="pure-form-message-inline">Default time for all watches, when the watch does not have a specific time setting.</span>
</div>
-<div class="pure-control-group">
-    {{ render_field(form.requests.form.jitter_seconds, class="jitter_seconds") }}
-    <span class="pure-form-message-inline">Example - 3 seconds random jitter could trigger up to 3 seconds earlier or up to 3 seconds later</span>
-</div>
-
<div class="pure-control-group">
    {% if not hide_remove_pass %}
        {% if current_user.is_authenticated %}

@@ -3,7 +3,6 @@
{% from '_helpers.jinja' import render_simple_field %}
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-overview.js')}}" defer></script>
-
<div class="box">

<form class="pure-form" action="{{ url_for('form_watch_add') }}" method="POST" id="new-watch-form">
@@ -46,7 +45,7 @@
    {% if watch.last_error is defined and watch.last_error != False %}error{% endif %}
    {% if watch.last_notification_error is defined and watch.last_notification_error != False %}error{% endif %}
    {% if watch.paused is defined and watch.paused != False %}paused{% endif %}
-    {% if watch.newest_history_key| int > watch.last_viewed and watch.history_n>=2 %}unviewed{% endif %}
+    {% if watch.newest_history_key| int > watch.last_viewed| int %}unviewed{% endif %}
    {% if watch.uuid in queued_uuids %}queued{% endif %}">
    <td class="inline">{{ loop.index }}</td>
    <td class="inline paused-state state-{{watch.paused}}"><a href="{{url_for('index', pause=watch.uuid, tag=active_tag)}}"><img src="{{url_for('static_content', group='images', filename='pause.svg')}}" alt="Pause" title="Pause"/></a></td>
@@ -67,8 +66,8 @@
    <span class="watch-tag-list">{{ watch.tag}}</span>
    {% endif %}
    </td>
-    <td class="last-checked">{{watch|format_last_checked_time|safe}}</td>
+    <td class="last-checked">{{watch|format_last_checked_time}}</td>
-    <td class="last-changed">{% if watch.history_n >=2 and watch.last_changed %}
+    <td class="last-changed">{% if watch.history|length >= 2 and watch.last_changed %}
    {{watch.last_changed|format_timestamp_timeago}}
    {% else %}
    Not yet
@@ -78,10 +77,10 @@
    <a {% if watch.uuid in queued_uuids %}disabled="true"{% endif %} href="{{ url_for('form_watch_checknow', uuid=watch.uuid, tag=request.args.get('tag')) }}"
        class="recheck pure-button button-small pure-button-primary">{% if watch.uuid in queued_uuids %}Queued{% else %}Recheck{% endif %}</a>
    <a href="{{ url_for('edit_page', uuid=watch.uuid)}}" class="pure-button button-small pure-button-primary">Edit</a>
-    {% if watch.history_n >= 2 %}
+    {% if watch.history|length >= 2 %}
    <a href="{{ url_for('diff_history_page', uuid=watch.uuid) }}" target="{{watch.uuid}}" class="pure-button button-small pure-button-primary diff-link">Diff</a>
    {% else %}
-    {% if watch.history_n == 1 %}
+    {% if watch.history|length == 1 %}
    <a href="{{ url_for('preview_page', uuid=watch.uuid)}}" target="{{watch.uuid}}" class="pure-button button-small pure-button-primary">Preview</a>
    {% endif %}
    {% endif %}

@@ -1,2 +0,0 @@
-"""Tests for the app."""
-

@@ -1,3 +0,0 @@
-#!/usr/bin/python3
-
-from .. import conftest

@@ -1,48 +0,0 @@
-#!/usr/bin/python3
-
-import time
-from flask import url_for
-from ..util import live_server_setup
-import logging
-
-
-def test_fetch_webdriver_content(client, live_server):
-    live_server_setup(live_server)
-
-    #####################
-    res = client.post(
-        url_for("settings_page"),
-        data={"application-empty_pages_are_a_change": "",
-              "requests-time_between_check-minutes": 180,
-              'application-fetch_backend': "html_webdriver"},
-        follow_redirects=True
-    )
-
-    assert b"Settings updated." in res.data
-
-    # Add our URL to the import page
-    res = client.post(
-        url_for("import_page"),
-        data={"urls": "https://changedetection.io/ci-test.html"},
-        follow_redirects=True
-    )
-
-    assert b"1 Imported" in res.data
-    time.sleep(3)
-    attempt = 0
-    while attempt < 20:
-        res = client.get(url_for("index"))
-        if not b'Checking now' in res.data:
-            break
-        logging.getLogger().info("Waiting for check to not say 'Checking now'..")
-        time.sleep(3)
-        attempt += 1
-
-
-    res = client.get(
-        url_for("preview_page", uuid="first"),
-        follow_redirects=True
-    )
-    logging.getLogger().info("Looking for correct fetched HTML (text) from server")
-
-    assert b'cool it works' in res.data

@@ -2,7 +2,7 @@

import time
from flask import url_for
-from .util import live_server_setup, extract_api_key_from_UI
+from .util import live_server_setup

import json
import uuid
@@ -53,10 +53,23 @@ def is_valid_uuid(val):
        return False


+# kinda funky, but works for now
+def _extract_api_key_from_UI(client):
+    import re
+    res = client.get(
+        url_for("settings_page"),
+    )
+    # <span id="api-key">{{api_key}}</span>
+
+    m = re.search('<span id="api-key">(.+?)</span>', str(res.data))
+    api_key = m.group(1)
+    return api_key.strip()
+
+
def test_api_simple(client, live_server):
    live_server_setup(live_server)

-    api_key = extract_api_key_from_UI(client)
+    api_key = _extract_api_key_from_UI(client)

    # Create a watch
    set_original_response()

@@ -7,7 +7,6 @@ from .util import set_original_response, set_modified_response, live_server_setu

sleep_time_for_fetch_thread = 3

-
# Basic test to check inscriptus is not adding return line chars, basically works etc
def test_inscriptus():
    from inscriptis import get_text
@@ -102,7 +101,6 @@ def test_check_basic_change_detection_functionality(client, live_server):
    # It should report nothing found (no new 'unviewed' class)
    res = client.get(url_for("index"))
    assert b'unviewed' not in res.data
-    assert b'Mark all viewed' not in res.data
    assert b'head title' not in res.data  # Should not be present because this is off by default
    assert b'test-endpoint' in res.data

@@ -111,8 +109,7 @@ def test_check_basic_change_detection_functionality(client, live_server):
    # Enable auto pickup of <title> in settings
    res = client.post(
        url_for("settings_page"),
-        data={"application-extract_title_as_title": "1", "requests-time_between_check-minutes": 180,
-              'application-fetch_backend': "html_requests"},
+        data={"application-extract_title_as_title": "1", "requests-time_between_check-minutes": 180, 'application-fetch_backend': "html_requests"},
        follow_redirects=True
    )

@@ -121,18 +118,11 @@ def test_check_basic_change_detection_functionality(client, live_server):

    res = client.get(url_for("index"))
    assert b'unviewed' in res.data
-    assert b'Mark all viewed' in res.data

    # It should have picked up the <title>
    assert b'head title' in res.data

-    # hit the mark all viewed link
-    res = client.get(url_for("mark_all_viewed"), follow_redirects=True)
-
-    assert b'Mark all viewed' not in res.data
-    assert b'unviewed' not in res.data
-
    #
    # Cleanup everything
    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
    assert b'Deleted' in res.data

@@ -150,8 +150,9 @@ def test_element_removal_full(client, live_server):
    # Give the thread time to pick it up
    time.sleep(sleep_time_for_fetch_thread)

-    # so that we set the state to 'unviewed' after all the edits
-    client.get(url_for("diff_history_page", uuid="first"))
+    # No change yet - first check
+    res = client.get(url_for("index"))
+    assert b"unviewed" not in res.data

    # Make a change to header/footer/nav
    set_modified_response()

@@ -1,127 +0,0 @@
-#!/usr/bin/python3
-
-import time
-from flask import url_for
-from .util import live_server_setup
-
-from ..html_tools import *
-
-
-def set_original_response():
-    test_return_data = """<html>
-       <body>
-     Some initial text</br>
-     <p>Which is across multiple lines</p>
-     </br>
-     So let's see what happens.  </br>
-     <div id="sametext">Some text thats the same</div>
-     <div id="changetext">Some text that will change</div>
-     </body>
-     </html>
-    """
-
-    with open("test-datastore/endpoint-content.txt", "w") as f:
-        f.write(test_return_data)
-    return None
-
-
-def set_modified_response():
-    test_return_data = """<html>
-       <body>
-     Some initial text</br>
-     <p>which has this one new line</p>
-     </br>
-     So let's see what happens.  </br>
-     <div id="sametext">Some text thats the same</div>
-     <div id="changetext">Some text that did change ( 1000 online <br/> 80 guests)</div>
-     </body>
-     </html>
-    """
-
-    with open("test-datastore/endpoint-content.txt", "w") as f:
-        f.write(test_return_data)
-
-    return None
-
-
-def test_check_filter_and_regex_extract(client, live_server):
-    sleep_time_for_fetch_thread = 3
-
-    live_server_setup(live_server)
-    css_filter = "#changetext"
-
-    set_original_response()
-
-    # Give the endpoint time to spin up
-    time.sleep(1)
-
-    # Add our URL to the import page
-    test_url = url_for('test_endpoint', _external=True)
-    res = client.post(
-        url_for("import_page"),
-        data={"urls": test_url},
-        follow_redirects=True
-    )
-    assert b"1 Imported" in res.data
-
-    # Trigger a check
-    client.get(url_for("form_watch_checknow"), follow_redirects=True)
-
-    # Give the thread time to pick it up
-    time.sleep(sleep_time_for_fetch_thread)
-
-    # Goto the edit page, add our ignore text
-    # Add our URL to the import page
-    res = client.post(
-        url_for("edit_page", uuid="first"),
-        data={"css_filter": css_filter,
-              'extract_text': '\d+ online\n\d+ guests',
-              "url": test_url,
-              "tag": "",
-              "headers": "",
-              'fetch_backend': "html_requests"
-              },
-        follow_redirects=True
-    )
-
-    assert b"Updated watch." in res.data
-
-    # Check it saved
-    res = client.get(
-        url_for("edit_page", uuid="first"),
-    )
-    assert b'\d+ online' in res.data
-
-    # Trigger a check
-    #  client.get(url_for("form_watch_checknow"), follow_redirects=True)
-
-    # Give the thread time to pick it up
-    time.sleep(sleep_time_for_fetch_thread)
-
-    # Make a change
-    set_modified_response()
-
-    # Trigger a check
-    client.get(url_for("form_watch_checknow"), follow_redirects=True)
-    # Give the thread time to pick it up
-    time.sleep(sleep_time_for_fetch_thread)
-
-    # It should have 'unviewed' still
-    # Because it should be looking at only that 'sametext' id
-    res = client.get(url_for("index"))
-    assert b'unviewed' in res.data
-
-    # Check HTML conversion detected and workd
-    res = client.get(
-        url_for("preview_page", uuid="first"),
-        follow_redirects=True
-    )
-
-    # Class will be blank for now because the frontend didnt apply the diff
-    assert b'<div class="">1000 online' in res.data
-
-    # Both regexs should be here
-    assert b'<div class="">80 guests' in res.data
-
-    # Should not be here
-    assert b'Some text that did change' not in res.data

@@ -1,84 +0,0 @@
-#!/usr/bin/python3
-
-import time
-import os
-import json
-import logging
-from flask import url_for
-from .util import live_server_setup
-from urllib.parse import urlparse, parse_qs
-
-def test_consistent_history(client, live_server):
-    live_server_setup(live_server)
-
-    # Give the endpoint time to spin up
-    time.sleep(1)
-    r = range(1, 50)
-
-    for one in r:
-        test_url = url_for('test_endpoint', content_type="text/html", content=str(one), _external=True)
-        res = client.post(
-            url_for("import_page"),
-            data={"urls": test_url},
-            follow_redirects=True
-        )
-
-        assert b"1 Imported" in res.data
-
-    time.sleep(3)
-    while True:
-        res = client.get(url_for("index"))
-        logging.debug("Waiting for 'Checking now' to go away..")
-        if b'Checking now' not in res.data:
-            break
-        time.sleep(0.5)
-
-    time.sleep(3)
-    # Essentially just triggers the DB write/update
-    res = client.post(
-        url_for("settings_page"),
-        data={"application-empty_pages_are_a_change": "",
-              "requests-time_between_check-minutes": 180,
-              'application-fetch_backend': "html_requests"},
-        follow_redirects=True
-    )
-    assert b"Settings updated." in res.data
-
-    # Give it time to write it out
-    time.sleep(3)
-    json_db_file = os.path.join(live_server.app.config['DATASTORE'].datastore_path, 'url-watches.json')
-
-    json_obj = None
-    with open(json_db_file, 'r') as f:
-        json_obj = json.load(f)
-
-    # assert the right amount of watches was found in the JSON
-    assert len(json_obj['watching']) == len(r), "Correct number of watches was found in the JSON"
-
-    # each one should have a history.txt containing just one line
-    for w in json_obj['watching'].keys():
-        history_txt_index_file = os.path.join(live_server.app.config['DATASTORE'].datastore_path, w, 'history.txt')
-        assert os.path.isfile(history_txt_index_file), "History.txt should exist where I expect it - {}".format(history_txt_index_file)
-
-        # Same like in model.Watch
-        with open(history_txt_index_file, "r") as f:
-            tmp_history = dict(i.strip().split(',', 2) for i in f.readlines())
-            assert len(tmp_history) == 1, "History.txt should contain 1 line"
-
-        # Should be two files,. the history.txt , and the snapshot.txt
-        files_in_watch_dir = os.listdir(os.path.join(live_server.app.config['DATASTORE'].datastore_path,
-                                                     w))
-        # Find the snapshot one
-        for fname in files_in_watch_dir:
-            if fname != 'history.txt':
-                # contents should match what we requested as content returned from the test url
-                with open(os.path.join(live_server.app.config['DATASTORE'].datastore_path, w, fname), 'r') as snapshot_f:
-                    contents = snapshot_f.read()
-                    watch_url = json_obj['watching'][w]['url']
-                    u = urlparse(watch_url)
-                    q = parse_qs(u[4])
-                    assert q['content'][0] == contents.strip(), "Snapshot file {} should contain {}".format(fname, q['content'][0])
-
-
-
-        assert len(files_in_watch_dir) == 2, "Should be just two files in the dir, history.txt and the snapshot"

@@ -154,10 +154,6 @@ def test_check_notification(client, live_server):
    time.sleep(1)
    assert os.path.exists("test-datastore/notification.txt") == False

-    res = client.get(url_for("notification_logs"))
-    # be sure we see it in the output log
-    assert b'New ChangeDetection.io Notification - ' + test_url.encode('utf-8') in res.data
-
    # cleanup for the next
    client.get(
        url_for("form_delete", uuid="all"),

@@ -43,7 +43,7 @@ def set_modified_with_trigger_text_response():
     Some NEW nice initial text</br>
     <p>Which is across multiple lines</p>
     </br>
-    Add to cart
+    foobar123
     <br/>
     So let's see what happens.  </br>
     </body>
@@ -60,7 +60,7 @@ def test_trigger_functionality(client, live_server):
    live_server_setup(live_server)

    sleep_time_for_fetch_thread = 3
-    trigger_text = "Add to cart"
+    trigger_text = "foobar123"
    set_original_ignore_response()

    # Give the endpoint time to spin up
@@ -78,6 +78,9 @@ def test_trigger_functionality(client, live_server):
    # Trigger a check
    client.get(url_for("form_watch_checknow"), follow_redirects=True)

+    # Give the thread time to pick it up
+    time.sleep(sleep_time_for_fetch_thread)
+
    # Goto the edit page, add our ignore text
    # Add our URL to the import page
    res = client.post(
@@ -95,12 +98,6 @@ def test_trigger_functionality(client, live_server):
    )
    assert bytes(trigger_text.encode('utf-8')) in res.data

-    # Give the thread time to pick it up
-    time.sleep(sleep_time_for_fetch_thread)
-
-    # so that we set the state to 'unviewed' after all the edits
-    client.get(url_for("diff_history_page", uuid="first"))
-
    # Trigger a check
    client.get(url_for("form_watch_checknow"), follow_redirects=True)

@@ -124,7 +121,7 @@ def test_trigger_functionality(client, live_server):
    res = client.get(url_for("index"))
    assert b'unviewed' not in res.data

-    # Now set the content which contains the trigger text
+    # Just to be sure.. set a regular modified change..
    time.sleep(sleep_time_for_fetch_thread)
    set_modified_with_trigger_text_response()

@@ -133,13 +130,7 @@ def test_trigger_functionality(client, live_server):
    res = client.get(url_for("index"))
    assert b'unviewed' in res.data

-    # https://github.com/dgtlmoon/changedetection.io/issues/616
-    # Apparently the actual snapshot that contains the trigger never shows
-    res = client.get(url_for("diff_history_page", uuid="first"))
-    assert b'Add to cart' in res.data
-
    # Check the preview/highlighter, we should be able to see what we triggered on, but it should be highlighted
    res = client.get(url_for("preview_page", uuid="first"))
-    # We should be able to see what we triggered on
-    assert b'<div class="triggered">Add to cart' in res.data
+    # We should be able to see what we ignored
+    assert b'<div class="triggered">foobar' in res.data

@@ -42,6 +42,9 @@ def test_trigger_regex_functionality(client, live_server):
    )
    assert b"1 Imported" in res.data

+    # Trigger a check
+    client.get(url_for("form_watch_checknow"), follow_redirects=True)
+
    # Give the thread time to pick it up
    time.sleep(sleep_time_for_fetch_thread)

@@ -57,9 +60,7 @@ def test_trigger_regex_functionality(client, live_server):
        "fetch_backend": "html_requests"},
        follow_redirects=True
    )
-    time.sleep(sleep_time_for_fetch_thread)
-    # so that we set the state to 'unviewed' after all the edits
-    client.get(url_for("diff_history_page", uuid="first"))

    with open("test-datastore/endpoint-content.txt", "w") as f:
        f.write("some new noise")
@@ -78,7 +79,3 @@ def test_trigger_regex_functionality(client, live_server):
    time.sleep(sleep_time_for_fetch_thread)
    res = client.get(url_for("index"))
    assert b'unviewed' in res.data
-
-    # Cleanup everything
-    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
-    assert b'Deleted' in res.data

@@ -22,9 +22,10 @@ def set_original_ignore_response():



-def test_trigger_regex_functionality_with_filter(client, live_server):
+def test_trigger_regex_functionality(client, live_server):

    live_server_setup(live_server)

    sleep_time_for_fetch_thread = 3

    set_original_ignore_response()
@@ -41,24 +42,26 @@ def test_trigger_regex_functionality_with_filter(client, live_server):
    )
    assert b"1 Imported" in res.data

-    # it needs time to save the original version
+    # Trigger a check
+    client.get(url_for("form_watch_checknow"), follow_redirects=True)
+
+    # Give the thread time to pick it up
    time.sleep(sleep_time_for_fetch_thread)

+    # It should report nothing found (just a new one shouldnt have anything)
+    res = client.get(url_for("index"))
+    assert b'unviewed' not in res.data
+
    ### test regex with filter
    res = client.post(
        url_for("edit_page", uuid="first"),
-        data={"trigger_text": "/cool.stuff/",
+        data={"trigger_text": "/cool.stuff\d/",
              "url": test_url,
              "css_filter": '#in-here',
              "fetch_backend": "html_requests"},
        follow_redirects=True
    )

-    # Give the thread time to pick it up
-    time.sleep(sleep_time_for_fetch_thread)
-
-    client.get(url_for("diff_history_page", uuid="first"))
-
    # Check that we have the expected text.. but it's not in the css filter we want
    with open("test-datastore/endpoint-content.txt", "w") as f:
        f.write("<html>some new noise with cool stuff2 ok</html>")
@@ -70,7 +73,6 @@ def test_trigger_regex_functionality_with_filter(client, live_server):
    res = client.get(url_for("index"))
    assert b'unviewed' not in res.data

-    # now this should trigger something
    with open("test-datastore/endpoint-content.txt", "w") as f:
        f.write("<html>some new noise with <span id=in-here>cool stuff6</span> ok</html>")

@@ -79,6 +81,4 @@ def test_trigger_regex_functionality_with_filter(client, live_server):
    res = client.get(url_for("index"))
    assert b'unviewed' in res.data

-    # Cleanup everything
-    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
-    assert b'Deleted' in res.data

@@ -44,61 +44,6 @@ def set_modified_response():

    return None

-# Handle utf-8 charset replies https://github.com/dgtlmoon/changedetection.io/pull/613
-def test_check_xpath_filter_utf8(client, live_server):
-    filter='//item/*[self::description]'
-
-    d='''<?xml version="1.0" encoding="UTF-8"?>
-    <rss xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:dc="http://purl.org/dc/elements/1.1/" version="2.0">
-    <channel>
-    <title>rpilocator.com</title>
-    <link>https://rpilocator.com</link>
-    <description>Find Raspberry Pi Computers in Stock</description>
-    <lastBuildDate>Thu, 19 May 2022 23:27:30 GMT</lastBuildDate>
-    <image>
-    <url>https://rpilocator.com/favicon.png</url>
-    <title>rpilocator.com</title>
-    <link>https://rpilocator.com/</link>
-    <width>32</width>
-    <height>32</height>
-    </image>
-    <item>
-    <title>Stock Alert (UK): RPi CM4 - 1GB RAM, No MMC, No Wifi is In Stock at Pimoroni</title>
-    <description>Stock Alert (UK): RPi CM4 - 1GB RAM, No MMC, No Wifi is In Stock at Pimoroni</description>
-    <link>https://rpilocator.com?vendor=pimoroni&utm_source=feed&utm_medium=rss</link>
-    <category>pimoroni</category>
-    <category>UK</category>
-    <category>CM4</category>
-    <guid isPermaLink="false">F9FAB0D9-DF6F-40C8-8DEE5FC0646BB722</guid>
-    <pubDate>Thu, 19 May 2022 14:32:32 GMT</pubDate>
-    </item>
-    </channel>
-    </rss>'''
-
-    with open("test-datastore/endpoint-content.txt", "w") as f:
-        f.write(d)
-
-    # Add our URL to the import page
-    test_url = url_for('test_endpoint', _external=True, content_type="application/rss+xml;charset=UTF-8")
-    res = client.post(
-        url_for("import_page"),
-        data={"urls": test_url},
-        follow_redirects=True
-    )
-    assert b"1 Imported" in res.data
-    res = client.post(
-        url_for("edit_page", uuid="first"),
-        data={"css_filter": filter, "url": test_url, "tag": "", "headers": "", 'fetch_backend': "html_requests"},
-        follow_redirects=True
-    )
-    assert b"Updated watch." in res.data
-    time.sleep(3)
-    res = client.get(url_for("index"))
-    assert b'Unicode strings with encoding declaration are not supported.' not in res.data
-    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
-    assert b'Deleted' in res.data
-
-
def test_check_markup_xpath_filter_restriction(client, live_server):
    sleep_time_for_fetch_thread = 3
@@ -150,8 +95,6 @@ def test_check_markup_xpath_filter_restriction(client, live_server):

    res = client.get(url_for("index"))
    assert b'unviewed' not in res.data
-    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
-    assert b'Deleted' in res.data


def test_xpath_validation(client, live_server):
@@ -174,8 +117,6 @@ def test_xpath_validation(client, live_server):
        follow_redirects=True
    )
    assert b"is not a valid XPath expression" in res.data
-    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
-    assert b'Deleted' in res.data


# actually only really used by the distll.io importer, but could be handy too
@@ -212,6 +153,8 @@ def test_check_with_prefix_css_filter(client, live_server):
        follow_redirects=True
    )

+    with open('/tmp/fuck.html', 'wb') as f:
+        f.write(res.data)
    assert b"Some text thats the same" in res.data #in selector
    assert b"Some text that will change" not in res.data #not in selector

@@ -1,7 +1,6 @@
#!/usr/bin/python3

from flask import make_response, request
-from flask import url_for

def set_original_response():
    test_return_data = """<html>
@@ -56,32 +55,14 @@ def set_more_modified_response():
    return None


-# kinda funky, but works for now
-def extract_api_key_from_UI(client):
-    import re
-    res = client.get(
-        url_for("settings_page"),
-    )
-    # <span id="api-key">{{api_key}}</span>
-
-    m = re.search('<span id="api-key">(.+?)</span>', str(res.data))
-    api_key = m.group(1)
-    return api_key.strip()
-
def live_server_setup(live_server):

    @live_server.app.route('/test-endpoint')
    def test_endpoint():
        ctype = request.args.get('content_type')
        status_code = request.args.get('status_code')
-        content = request.args.get('content') or None

        try:
-            if content is not None:
-                resp = make_response(content, status_code)
-                resp.headers['Content-Type'] = ctype if ctype else 'text/html'
-                return resp
-
            # Tried using a global var here but didn't seem to work, so reading from a file instead.
            with open("test-datastore/endpoint-content.txt", "r") as f:
                resp = make_response(f.read(), status_code)

@@ -40,11 +40,10 @@ class update_worker(threading.Thread):
            contents = ""
            screenshot = False
            update_obj= {}
-            xpath_data = False
            now = time.time()

            try:
-                changed_detected, update_obj, contents, screenshot, xpath_data = update_handler.run(uuid)
+                changed_detected, update_obj, contents, screenshot = update_handler.run(uuid)

                # Re #342
                # In Python 3, all strings are sequences of Unicode characters. There is a bytes type that holds raw bytes.
@@ -56,25 +55,12 @@ class update_worker(threading.Thread):
            except content_fetcher.ReplyWithContentButNoText as e:
                # Totally fine, it's by choice - just continue on, nothing more to care about
                # Page had elements/content but no renderable text
-                if self.datastore.data['watching'][uuid].get('css_filter'):
-                    self.datastore.update_watch(uuid=uuid, update_obj={'last_error': "Got HTML content but no text found (CSS / xPath Filter not found in page?)"})
-                else:
-                    self.datastore.update_watch(uuid=uuid, update_obj={'last_error': "Got HTML content but no text found."})
                pass
            except content_fetcher.EmptyReply as e:
                # Some kind of custom to-str handler in the exception handler that does this?
-                err_text = "EmptyReply - try increasing 'Wait seconds before extracting text', Status Code {}".format(e.status_code)
+                err_text = "EmptyReply: Status Code {}".format(e.status_code)
                self.datastore.update_watch(uuid=uuid, update_obj={'last_error': err_text,
                                                                   'last_check_status': e.status_code})
-            except content_fetcher.ScreenshotUnavailable as e:
-                err_text = "Screenshot unavailable, page did not render fully in the expected time - try increasing 'Wait seconds before extracting text'"
-                self.datastore.update_watch(uuid=uuid, update_obj={'last_error': err_text,
-                                                                   'last_check_status': e.status_code})
-            except content_fetcher.PageUnloadable as e:
-                err_text = "Page request from server didnt respond correctly"
-                self.datastore.update_watch(uuid=uuid, update_obj={'last_error': err_text,
-                                                                   'last_check_status': e.status_code})
-
            except Exception as e:
                self.app.logger.error("Exception reached processing watch UUID: %s - %s", uuid, str(e))
                self.datastore.update_watch(uuid=uuid, update_obj={'last_error': str(e)})
@@ -87,7 +73,9 @@ class update_worker(threading.Thread):
                # For the FIRST time we check a site, or a change detected, save the snapshot.
                if changed_detected or not watch['last_checked']:
                    # A change was detected
-                    fname = watch.save_history_text(contents=contents, timestamp=str(round(time.time())))
+                    fname = self.datastore.save_history_text(watch_uuid=uuid, contents=contents)
+                    # Should always be keyed by string(timestamp)
+                    self.datastore.update_watch(uuid, {"history": {str(round(time.time())): fname}})

                # Generally update anything interesting returned
                self.datastore.update_watch(uuid=uuid, update_obj=update_obj)
@@ -98,10 +86,16 @@ class update_worker(threading.Thread):
                    print (">> Change detected in UUID {} - {}".format(uuid, watch['url']))

                    # Notifications should only trigger on the second time (first time, we gather the initial snapshot)
-                    if watch.history_n >= 2:
+                    if len(watch['history']) > 1:

-                        dates = list(watch.history.keys())
-                        prev_fname = watch.history[dates[-2]]
+                        dates = list(watch['history'].keys())
+                        # Convert to int, sort and back to str again
+                        # @todo replace datastore getter that does this automatically
+                        dates = [int(i) for i in dates]
+                        dates.sort(reverse=True)
+                        dates = [str(i) for i in dates]
+
+                        prev_fname = watch['history'][dates[1]]


                        # Did it have any notification alerts to hit?
@@ -154,9 +148,6 @@ class update_worker(threading.Thread):
            # Always save the screenshot if it's available
            if screenshot:
                self.datastore.save_screenshot(watch_uuid=uuid, screenshot=screenshot)
-            if xpath_data:
-                self.datastore.save_xpath_data(watch_uuid=uuid, data=xpath_data)


            self.current_uuid = None  # Done
            self.q.task_done()

Before Width: | Height: | Size: 20 KiB |
|
Before Width: | Height: | Size: 22 KiB |
|
Before Width: | Height: | Size: 238 KiB |
@@ -18,7 +18,7 @@ wtforms ~= 3.0
jsonpath-ng ~= 1.5.3

# Notification library
-apprise ~= 0.9.9
+apprise ~= 0.9.8.3

# apprise mqtt https://github.com/dgtlmoon/changedetection.io/issues/315
paho-mqtt

[3 binary images changed or moved, before/after sizes unchanged: 115 KiB, 27 KiB, 190 KiB]