Compare commits


21 Commits

Author SHA1 Message Date
dgtlmoon ac1b5ee6f7 Update wiki link for 'More info' about sharing a watch and its configuration 2022-05-17 22:43:56 +02:00
dgtlmoon aca6c753bf skip JSON 2022-05-17 22:12:37 +02:00
dgtlmoon efae86c134 small cleanups 2022-05-17 22:06:20 +02:00
dgtlmoon 0f72471343 Option to control if pages with no renderable content are a change (like JS webapps that dont render any text sometimes etc) 2022-05-17 22:03:18 +02:00
dgtlmoon 16809b48f8 Playwright - raise EmptyReply on empty reply, no need to process further 2022-05-17 18:40:15 +02:00
dgtlmoon 67c833d2bc Re #214 - configurable wait extra seconds for webdriver requests before extracting text (#606) 2022-05-17 18:35:33 +02:00
weeix 31fea55ee4 Fix PLAYWRIGHT_DRIVER_URL default value (cf. #587) (#599) 2022-05-14 22:34:44 +02:00
dgtlmoon b6c50d3b1a Update PIP readme.md 2022-05-10 22:46:59 +02:00
dgtlmoon 034507f14f Fixing Pip install problem - Update MANIFEST to include model/ subdir, improving imports (#593) 2022-05-10 22:45:08 +02:00
dgtlmoon 0e385b1c22 0.39.13 2022-05-10 17:24:38 +02:00
dgtlmoon f28c260576 Distill.io JSON export file importer (#592) 2022-05-10 17:15:41 +02:00
dgtlmoon 18f0b63b7d Ability to specify a list of proxies to choose from, always using the first one by default, See wiki (#591) 2022-05-08 20:35:36 +02:00
Thilo-Alexander Ginkel 97045e7a7b Improving Playwright docs (#588) 2022-05-07 22:23:17 +02:00
dgtlmoon 9807cf0cda Playwright - code fix 2022-05-07 17:29:59 +02:00
dgtlmoon d4b5237103 Playwright fetcher - more reliable by just waiting arbitrary seconds after the last network IO 2022-05-07 17:14:40 +02:00
dgtlmoon dc6f76ba64 Make proxy configuration more consistent - see https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration (#585) 2022-05-07 16:37:56 +02:00
dgtlmoon 1f2f93184e Playwright fetcher - use the correct default User-Agent 2022-05-06 23:59:38 +02:00
dgtlmoon 0f08c8dda3 Toggle visibility of extra requests options/settings when not in use (#584) 2022-05-06 23:40:32 +02:00
dgtlmoon 68db20168e Add new fetch method: Playwright Chromium (Selenium/WebDriver alternative) (#489) 2022-05-02 21:40:40 +02:00
dgtlmoon 1d4474f5a3 Simplify scrub operation (simply cleans all) (#575) 2022-05-02 21:10:23 +02:00
dgtlmoon 613308881c Bugfix - dont update record when deleted during check 2022-05-01 21:41:29 +02:00
29 changed files with 921 additions and 243 deletions

View File

@@ -20,6 +20,11 @@ COPY requirements.txt /requirements.txt
RUN pip install --target=/dependencies -r /requirements.txt
# Playwright is an alternative to Selenium
# Excluded this package from requirements.txt to prevent arm/v6 and arm/v7 builds from failing
RUN pip install --target=/dependencies playwright~=1.20 \
|| echo "WARN: Failed to install Playwright. The application can still run, but the Playwright option will be disabled."
# Final image stage
FROM python:3.8-slim
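
Because the pip install above is allowed to fail on arm/v6 and arm/v7, the application has to cope with Playwright being absent at runtime. A minimal sketch of that kind of guard — the helper name and flag here are illustrative, not taken from this changeset:

```python
# Degrade gracefully when the optional Playwright dependency is absent
# (e.g. the pip install in the Dockerfile above failed on arm builds).
try:
    from playwright.sync_api import sync_playwright  # noqa: F401
    PLAYWRIGHT_AVAILABLE = True
except ImportError:
    PLAYWRIGHT_AVAILABLE = False

def playwright_status():
    # Hypothetical helper: report whether the Playwright fetcher can be offered
    return "enabled" if PLAYWRIGHT_AVAILABLE else "disabled (package not installed)"

if __name__ == "__main__":
    print("Playwright fetcher:", playwright_status())
```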

View File

@@ -1,5 +1,6 @@
recursive-include changedetectionio/templates *
recursive-include changedetectionio/static *
recursive-include changedetectionio/model *
include changedetection.py
global-exclude *.pyc
global-exclude node_modules

View File

@@ -16,6 +16,13 @@ Live your data-life *pro-actively* instead of *re-actively*, do not rely on mani
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />
**Get your own private instance now! Let us host it for you!**
[**Try our $6.99/month subscription - unlimited checks, watches and notifications!**](https://lemonade.changedetection.io/start), choose from different geographical locations, let us handle everything for you.
#### Example use cases
Know when ...
@@ -58,14 +65,3 @@ Then visit http://127.0.0.1:5000 , You should now be able to access the UI.
See https://github.com/dgtlmoon/changedetection.io for more information.
### Support us
Do you use changedetection.io to make money? Does it save you time or money? Does it make your life easier? Less stressful? Remember, we write this software when we should be doing actual paid work; we have to buy food and pay rent just like you.
Please support us, even small amounts help a LOT.
BTC `1PLFN327GyUarpJd7nVe7Reqg9qHx5frNn`
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/btc-support.png" style="max-width:50%;" alt="Support us!" />

View File

@@ -40,7 +40,7 @@ from flask_wtf import CSRFProtect
from changedetectionio import html_tools
__version__ = '0.39.12'
__version__ = '0.39.13.1'
datastore = None
@@ -434,48 +434,21 @@ def changedetection_app(config=None, datastore_o=None):
@login_required
def scrub_page():
import re
if request.method == 'POST':
confirmtext = request.form.get('confirmtext')
limit_date = request.form.get('limit_date')
limit_timestamp = 0
# Re #149 - allow empty/0 timestamp limit
if len(limit_date):
try:
limit_date = limit_date.replace('T', ' ')
# I noticed chrome will show '/' but actually submit '-'
limit_date = limit_date.replace('-', '/')
# In the case that :ss seconds are supplied
limit_date = re.sub(r'(\d\d:\d\d)(:\d\d)', '\\1', limit_date)
str_to_dt = datetime.datetime.strptime(limit_date, '%Y/%m/%d %H:%M')
limit_timestamp = int(str_to_dt.timestamp())
if limit_timestamp > time.time():
flash("Timestamp is in the future, cannot continue.", 'error')
return redirect(url_for('scrub_page'))
except ValueError:
flash('Incorrect date format, cannot continue.', 'error')
return redirect(url_for('scrub_page'))
if confirmtext == 'scrub':
changes_removed = 0
for uuid, watch in datastore.data['watching'].items():
if limit_timestamp:
changes_removed += datastore.scrub_watch(uuid, limit_timestamp=limit_timestamp)
else:
changes_removed += datastore.scrub_watch(uuid)
for uuid in datastore.data['watching'].keys():
datastore.scrub_watch(uuid)
flash("Cleared snapshot history ({} snapshots removed)".format(changes_removed))
flash("Cleared all snapshot history")
else:
flash('Incorrect confirmation text.', 'error')
return redirect(url_for('index'))
output = render_template("scrub.html")
output = render_template("scrub.html")
return output
@@ -545,10 +518,31 @@ def changedetection_app(config=None, datastore_o=None):
if all(value == 0 or value == None for value in datastore.data['watching'][uuid]['time_between_check'].values()):
default['time_between_check'] = deepcopy(datastore.data['settings']['requests']['time_between_check'])
form = forms.watchForm(formdata=request.form if request.method == 'POST' else None,
data=default
)
# Defaults for proxy choice
if datastore.proxy_list is not None: # When enabled
system_proxy = datastore.data['settings']['requests']['proxy']
if default['proxy'] is None:
default['proxy'] = system_proxy
else:
# Does the chosen one exist?
if not any(default['proxy'] in tup for tup in datastore.proxy_list):
default['proxy'] = datastore.proxy_list[0][0]
# Used by the form handler to keep or remove the proxy settings
default['proxy_list'] = datastore.proxy_list
# proxy_override set to the json/text list of the items
form = forms.watchForm(formdata=request.form if request.method == 'POST' else None,
data=default,
)
if datastore.proxy_list is None:
# @todo - Couldn't get setattr() etc dynamic addition working, so remove it instead
del form.proxy
else:
form.proxy.choices = datastore.proxy_list
if default['proxy'] is None:
form.proxy.default='http://hello'
if request.method == 'POST' and form.validate():
extra_update_obj = {}
@@ -611,12 +605,12 @@ def changedetection_app(config=None, datastore_o=None):
if request.method == 'POST' and not form.validate():
flash("An error occurred, please see below.", "error")
output = render_template("edit.html",
uuid=uuid,
watch=datastore.data['watching'][uuid],
form=form,
has_empty_checktime=using_default_check_time,
using_global_webdriver_wait=default['webdriver_delay'] is None,
current_base_url=datastore.data['settings']['application']['base_url'],
emailprefix=os.getenv('NOTIFICATION_MAIL_BUTTON_PREFIX', False)
)
@@ -628,10 +622,28 @@ def changedetection_app(config=None, datastore_o=None):
def settings_page():
from changedetectionio import content_fetcher, forms
default = deepcopy(datastore.data['settings'])
if datastore.proxy_list is not None:
# When enabled
system_proxy = datastore.data['settings']['requests']['proxy']
# In the case it doesn't exist anymore
if not any([system_proxy in tup for tup in datastore.proxy_list]):
system_proxy = None
default['requests']['proxy'] = system_proxy if system_proxy is not None else datastore.proxy_list[0][0]
# Used by the form handler to keep or remove the proxy settings
default['proxy_list'] = datastore.proxy_list
# Don't use form.data on POST so that it doesn't override the checkbox status from the POST
form = forms.globalSettingsForm(formdata=request.form if request.method == 'POST' else None,
data=datastore.data['settings']
data=default
)
if datastore.proxy_list is None:
# @todo - Couldn't get setattr() etc dynamic addition working, so remove it instead
del form.requests.form.proxy
else:
form.requests.form.proxy.choices = datastore.proxy_list
if request.method == 'POST':
# Password unset is a GET, but we can lock the session to a salted env password to always need the password
@@ -671,44 +683,37 @@ def changedetection_app(config=None, datastore_o=None):
@app.route("/import", methods=['GET', "POST"])
@login_required
def import_page():
import validators
remaining_urls = []
good = 0
if request.method == 'POST':
now=time.time()
urls = request.values.get('urls').split("\n")
from .importer import import_url_list, import_distill_io_json
if (len(urls) > 5000):
flash("Importing 5,000 of the first URLs from your list, the rest can be imported again.")
# URL List import
if request.values.get('urls') and len(request.values.get('urls').strip()):
# Import and push into the queue for immediate update check
importer = import_url_list()
importer.run(data=request.values.get('urls'), flash=flash, datastore=datastore)
for uuid in importer.new_uuids:
update_q.put(uuid)
for url in urls:
url = url.strip()
url, *tags = url.split(" ")
# Flask wtform validators won't work with basic auth, use validators package
# Up to 5000 per batch so we don't flood the server
if len(url) and validators.url(url.replace('source:', '')) and good < 5000:
new_uuid = datastore.add_watch(url=url.strip(), tag=" ".join(tags), write_to_disk_now=False)
if new_uuid:
# Straight into the queue.
update_q.put(new_uuid)
good += 1
continue
if len(importer.remaining_data) == 0:
return redirect(url_for('index'))
else:
remaining_urls = importer.remaining_data
if len(url.strip()):
remaining_urls.append(url)
# Distill.io import
if request.values.get('distill-io') and len(request.values.get('distill-io').strip()):
# Import and push into the queue for immediate update check
d_importer = import_distill_io_json()
d_importer.run(data=request.values.get('distill-io'), flash=flash, datastore=datastore)
for uuid in d_importer.new_uuids:
update_q.put(uuid)
flash("{} Imported in {:.2f}s, {} Skipped.".format(good, time.time()-now,len(remaining_urls)))
datastore.needs_write = True
if len(remaining_urls) == 0:
# Looking good, redirect to index.
return redirect(url_for('index'))
# Could be some remaining, or we could be on GET
output = render_template("import.html",
remaining="\n".join(remaining_urls)
import_url_list_remaining="\n".join(remaining_urls),
original_distill_json=''
)
return output

View File

@@ -8,7 +8,7 @@ import sys
import eventlet
import eventlet.wsgi
from . import store, changedetection_app
from . import store, changedetection_app, content_fetcher
from . import __version__
def main():

View File

@@ -1,14 +1,9 @@
from abc import ABC, abstractmethod
import chardet
import os
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.webdriver.common.proxy import Proxy as SeleniumProxy
from selenium.common.exceptions import WebDriverException
import requests
import time
import urllib3.exceptions
import sys
class EmptyReply(Exception):
def __init__(self, status_code, url):
@@ -16,16 +11,30 @@ class EmptyReply(Exception):
self.status_code = status_code
self.url = url
return
pass
class ReplyWithContentButNoText(Exception):
def __init__(self, status_code, url):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
return
pass
class Fetcher():
error = None
status_code = None
content = None
headers = None
# Will be needed in the future by the VisualSelector, always get this where possible.
screenshot = False
fetcher_description = "No description"
system_http_proxy = os.getenv('HTTP_PROXY')
system_https_proxy = os.getenv('HTTPS_PROXY')
fetcher_description ="No description"
# Time ONTOP of the system defined env minimum time
render_extract_delay=0
@abstractmethod
def get_error(self):
@@ -46,10 +55,6 @@ class Fetcher():
def quit(self):
return
@abstractmethod
def screenshot(self):
return
@abstractmethod
def get_last_status_code(self):
return self.status_code
@@ -59,29 +64,121 @@ class Fetcher():
def is_ready(self):
return True
# Maybe for the future, each fetcher provides its own diff output, could be used for text, image
# the current one would return javascript output (as we use JS to generate the diff)
#
# Returns tuple(mime_type, stream)
# @abstractmethod
# def return_diff(self, stream_a, stream_b):
# return
def available_fetchers():
import inspect
from changedetectionio import content_fetcher
p=[]
for name, obj in inspect.getmembers(content_fetcher):
if inspect.isclass(obj):
# @todo html_ is maybe better as fetcher_ or something
# In this case, make sure to edit the default one in store.py and fetch_site_status.py
if "html_" in name:
t=tuple([name,obj.fetcher_description])
p.append(t)
# See the if statement at the bottom of this file for how we switch between playwright and webdriver
import inspect
p = []
for name, obj in inspect.getmembers(sys.modules[__name__], inspect.isclass):
if inspect.isclass(obj):
# @todo html_ is maybe better as fetcher_ or something
# In this case, make sure to edit the default one in store.py and fetch_site_status.py
if name.startswith('html_'):
t = tuple([name, obj.fetcher_description])
p.append(t)
return p
return p
class html_webdriver(Fetcher):
class base_html_playwright(Fetcher):
fetcher_description = "Playwright {}/Javascript".format(
os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').capitalize()
)
if os.getenv("PLAYWRIGHT_DRIVER_URL"):
fetcher_description += " via '{}'".format(os.getenv("PLAYWRIGHT_DRIVER_URL"))
browser_type = ''
command_executor = ''
# Configs for Proxy setup
# In the ENV vars, is prefixed with "playwright_proxy_", so it is for example "playwright_proxy_server"
playwright_proxy_settings_mappings = ['bypass', 'server', 'username', 'password']
proxy = None
def __init__(self, proxy_override=None):
# .strip('"') is going to save someone a lot of time when they accidently wrap the env value
self.browser_type = os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').strip('"')
self.command_executor = os.getenv(
"PLAYWRIGHT_DRIVER_URL",
'ws://playwright-chrome:3000'
).strip('"')
# If any proxy settings are enabled, then we should setup the proxy object
proxy_args = {}
for k in self.playwright_proxy_settings_mappings:
v = os.getenv('playwright_proxy_' + k, False)
if v:
proxy_args[k] = v.strip('"')
if proxy_args:
self.proxy = proxy_args
# allow per-watch proxy selection override
if proxy_override:
self.proxy = {'server': proxy_override}
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False):
from playwright.sync_api import sync_playwright
import playwright._impl._api_types
from playwright._impl._api_types import Error, TimeoutError
with sync_playwright() as p:
browser_type = getattr(p, self.browser_type)
# Seemed to cause a connection Exception even though I can see it connect
# self.browser = browser_type.connect(self.command_executor, timeout=timeout*1000)
browser = browser_type.connect_over_cdp(self.command_executor, timeout=timeout * 1000)
# Set user agent to prevent Cloudflare from blocking the browser
# Use the default one configured in the App.py model that's passed from fetch_site_status.py
context = browser.new_context(
user_agent=request_headers['User-Agent'] if request_headers.get('User-Agent') else 'Mozilla/5.0',
proxy=self.proxy
)
page = context.new_page()
page.set_viewport_size({"width": 1280, "height": 1024})
try:
response = page.goto(url, timeout=timeout * 1000, wait_until='commit')
# Wait_until = commit
# - `'commit'` - consider operation to be finished when network response is received and the document started loading.
# Better to not use any smarts from Playwright and just wait an arbitrary number of seconds
# This seemed to solve nearly all 'TimeoutErrors'
extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
page.wait_for_timeout(extra_wait * 1000)
except playwright._impl._api_types.TimeoutError as e:
raise EmptyReply(url=url, status_code=None)
if response is None:
raise EmptyReply(url=url, status_code=None)
if len(page.content().strip()) == 0:
raise EmptyReply(url=url, status_code=None)
self.status_code = response.status
self.content = page.content()
self.headers = response.all_headers()
# Some bug where it gives the wrong screenshot size, but making a request with the clip set first seems to solve it
# JPEG is better here because the screenshots can be very very large
page.screenshot(type='jpeg', clip={'x': 1.0, 'y': 1.0, 'width': 1280, 'height': 1024})
self.screenshot = page.screenshot(type='jpeg', full_page=True, quality=90)
context.close()
browser.close()
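
The flow of the new base_html_playwright.run() above condenses to a few calls: connect to a remote Chromium over CDP, skip load-state heuristics in favour of a fixed extra delay, then collect content and a JPEG screenshot. A standalone sketch, assuming the browserless/chrome ws:// endpoint used elsewhere in this changeset:

```python
# Condensed Playwright fetch: connect over CDP, wait a fixed settle time,
# then grab page content and a full-page JPEG screenshot.
import os
from playwright.sync_api import sync_playwright

def fetch(url, timeout=20, extra_wait=5):
    driver_url = os.getenv("PLAYWRIGHT_DRIVER_URL", "ws://playwright-chrome:3000")
    with sync_playwright() as p:
        browser = p.chromium.connect_over_cdp(driver_url, timeout=timeout * 1000)
        context = browser.new_context(user_agent="Mozilla/5.0")
        page = context.new_page()
        page.set_viewport_size({"width": 1280, "height": 1024})
        response = page.goto(url, timeout=timeout * 1000, wait_until="commit")
        page.wait_for_timeout(extra_wait * 1000)  # arbitrary wait, not load-state smarts
        content = page.content()
        screenshot = page.screenshot(type="jpeg", full_page=True, quality=90)
        context.close()
        browser.close()
        return (response.status if response else None), content, screenshot
```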
class base_html_webdriver(Fetcher):
if os.getenv("WEBDRIVER_URL"):
fetcher_description = "WebDriver Chrome/Javascript via '{}'".format(os.getenv("WEBDRIVER_URL"))
else:
@@ -94,12 +191,11 @@ class html_webdriver(Fetcher):
selenium_proxy_settings_mappings = ['proxyType', 'ftpProxy', 'httpProxy', 'noProxy',
'proxyAutoconfigUrl', 'sslProxy', 'autodetect',
'socksProxy', 'socksVersion', 'socksUsername', 'socksPassword']
proxy = None
def __init__(self, proxy_override=None):
from selenium.webdriver.common.proxy import Proxy as SeleniumProxy
proxy=None
def __init__(self):
# .strip('"') is going to save someone a lot of time when they accidently wrap the env value
self.command_executor = os.getenv("WEBDRIVER_URL", 'http://browser-chrome:4444/wd/hub').strip('"')
@@ -110,6 +206,16 @@ class html_webdriver(Fetcher):
if v:
proxy_args[k] = v.strip('"')
# Map back standard HTTP_ and HTTPS_PROXY to webDriver httpProxy/sslProxy
if not proxy_args.get('webdriver_httpProxy') and self.system_http_proxy:
proxy_args['httpProxy'] = self.system_http_proxy
if not proxy_args.get('webdriver_sslProxy') and self.system_https_proxy:
proxy_args['httpsProxy'] = self.system_https_proxy
# Allows override the proxy on a per-request basis
if proxy_override is not None:
proxy_args['httpProxy'] = proxy_override
if proxy_args:
self.proxy = SeleniumProxy(raw=proxy_args)
@@ -121,6 +227,9 @@ class html_webdriver(Fetcher):
request_method,
ignore_status_codes=False):
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.common.exceptions import WebDriverException
# request_body, request_method unused for now, until some magic in the future happens.
# check env for WEBDRIVER_URL
@@ -142,12 +251,11 @@ class html_webdriver(Fetcher):
# raise EmptyReply(url=url, status_code=r.status_code)
# @todo - dom wait loaded?
time.sleep(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)))
time.sleep(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay)
self.content = self.driver.page_source
self.headers = {}
def screenshot(self):
return self.driver.get_screenshot_as_png()
self.screenshot = self.driver.get_screenshot_as_png()
self.quit()
# Does the connection to the webdriver work? run a test connection.
def is_ready(self):
@@ -170,10 +278,14 @@ class html_webdriver(Fetcher):
except Exception as e:
print("Exception in chrome shutdown/quit" + str(e))
# "html_requests" is listed as the default fetcher in store.py!
class html_requests(Fetcher):
fetcher_description = "Basic fast Plaintext/HTTP Client"
def __init__(self, proxy_override=None):
self.proxy_override = proxy_override
def run(self,
url,
timeout,
@@ -182,12 +294,24 @@ class html_requests(Fetcher):
request_method,
ignore_status_codes=False):
proxies={}
# Allows override the proxy on a per-request basis
if self.proxy_override:
proxies = {'http': self.proxy_override, 'https': self.proxy_override, 'ftp': self.proxy_override}
else:
if self.system_http_proxy:
proxies['http'] = self.system_http_proxy
if self.system_https_proxy:
proxies['https'] = self.system_https_proxy
r = requests.request(method=request_method,
data=request_body,
url=url,
headers=request_headers,
timeout=timeout,
verify=False)
data=request_body,
url=url,
headers=request_headers,
timeout=timeout,
proxies=proxies,
verify=False)
# If the response did not tell us what encoding format to expect, then use chardet to override what `requests` thinks.
# For example - some sites don't tell us it's utf-8, but return utf-8 content
@@ -207,3 +331,11 @@ class html_requests(Fetcher):
self.content = r.text
self.headers = r.headers
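
The chardet override mentioned in the comment above is easy to reproduce in isolation — a sketch, assuming a server that omits the charset from its Content-Type header:

```python
# If the Content-Type header carries no charset, let chardet guess the
# encoding from the raw bytes instead of trusting requests' default.
import chardet
import requests

r = requests.get("https://example.com", timeout=15)
if "charset" not in r.headers.get("content-type", "").lower():
    guess = chardet.detect(r.content)
    if guess.get("encoding"):
        r.encoding = guess["encoding"]
print(r.encoding, len(r.text))
```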
# Decide which is the 'real' HTML webdriver, this is more a system wide config
# rather than site-specific.
use_playwright_as_chrome_fetcher = os.getenv('PLAYWRIGHT_DRIVER_URL', False)
if use_playwright_as_chrome_fetcher:
html_webdriver = base_html_playwright
else:
html_webdriver = base_html_webdriver

View File

@@ -16,6 +16,34 @@ class perform_site_check():
super().__init__(*args, **kwargs)
self.datastore = datastore
# If there was a proxy list enabled, figure out what proxy_args/which proxy to use
# if watch.proxy use that
# fetcher.proxy_override = watch.proxy or main config proxy
# Allows override the proxy on a per-request basis
# ALWAYS use the first one if nothing is selected
def set_proxy_from_list(self, watch):
proxy_args = None
if self.datastore.proxy_list is None:
return None
# If its a valid one
if any([watch['proxy'] in p for p in self.datastore.proxy_list]):
proxy_args = watch['proxy']
# not valid (including None), try the system one
else:
system_proxy = self.datastore.data['settings']['requests']['proxy']
# Is not None and exists
if any([system_proxy in p for p in self.datastore.proxy_list]):
proxy_args = system_proxy
# Fallback - Did not resolve anything, use the first available
if proxy_args is None:
proxy_args = self.datastore.proxy_list[0][0]
return proxy_args
def run(self, uuid):
timestamp = int(time.time()) # used for storage etc too
@@ -66,8 +94,18 @@ class perform_site_check():
# If the klass doesn't exist, just use a default
klass = getattr(content_fetcher, "html_requests")
fetcher = klass()
proxy_args = self.set_proxy_from_list(watch)
fetcher = klass(proxy_override=proxy_args)
# Configurable per-watch or global extra delay before extracting text (for webDriver types)
system_webdriver_delay = self.datastore.data['settings']['application'].get('webdriver_delay', None)
if watch['webdriver_delay'] is not None:
fetcher.render_extract_delay = watch['webdriver_delay']
elif system_webdriver_delay is not None:
fetcher.render_extract_delay = system_webdriver_delay
fetcher.run(url, timeout, request_headers, request_body, request_method, ignore_status_code)
# Fetching complete, now filters
# @todo move to class / maybe inside of fetcher abstract base?
@@ -117,11 +155,13 @@ class perform_site_check():
# Then we assume HTML
if has_filter_rule:
# For HTML/XML we offer xpath as an option, just start a regular xPath "/.."
if css_filter_rule[0] == '/':
html_content = html_tools.xpath_filter(xpath_filter=css_filter_rule, html_content=fetcher.content)
if css_filter_rule[0] == '/' or css_filter_rule.startswith('xpath:'):
html_content = html_tools.xpath_filter(xpath_filter=css_filter_rule.replace('xpath:', ''),
html_content=fetcher.content)
else:
# CSS Filter, extract the HTML that matches and feed that into the existing inscriptis::get_text
html_content = html_tools.css_filter(css_filter=css_filter_rule, html_content=fetcher.content)
if has_subtractive_selectors:
html_content = html_tools.element_removal(subtractive_selectors, html_content)
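
The dispatch above — a leading '/' or an explicit 'xpath:' prefix selects XPath, anything else is treated as a CSS selector — can be shown with plain lxml standing in for the project's html_tools helpers. A sketch, not the actual implementation:

```python
# Dispatch a filter rule: 'xpath:' prefix or leading '/' means XPath,
# otherwise treat it as a CSS selector (requires the cssselect package).
from lxml import etree, html

def apply_filter(rule, content):
    tree = html.fromstring(content)
    if rule.startswith('/') or rule.startswith('xpath:'):
        matches = tree.xpath(rule.replace('xpath:', '', 1))
    else:
        matches = tree.cssselect(rule)
    return "".join(etree.tostring(m, encoding="unicode") for m in matches)

doc = "<div><p class='sametext'>Some text thats the same</p><p>Other</p></div>"
print(apply_filter("xpath://*[contains(@class, 'sametext')]", doc))
print(apply_filter("p.sametext", doc))
```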
@@ -141,10 +181,14 @@ class perform_site_check():
# Re #340 - return the content before the 'ignore text' was applied
text_content_before_ignored_filter = stripped_text_from_html.encode('utf-8')
# Re #340 - return the content before the 'ignore text' was applied
text_content_before_ignored_filter = stripped_text_from_html.encode('utf-8')
# Treat pages with no renderable text content as a change? No by default
empty_pages_are_a_change = self.datastore.data['settings']['application'].get('empty_pages_are_a_change', False)
if not is_json and not empty_pages_are_a_change and len(stripped_text_from_html.strip()) == 0:
raise content_fetcher.ReplyWithContentButNoText(url=url, status_code=200)
# We rely on the actual text in the html output.. many sites have random script vars etc,
# in the future we'll implement other mechanisms.
@@ -192,9 +236,4 @@ class perform_site_check():
if not watch['title'] or not len(watch['title']):
update_obj['title'] = html_tools.extract_element(find='title', html_content=fetcher.content)
if self.datastore.data['settings']['application'].get('real_browser_save_screenshot', True):
screenshot = fetcher.screenshot()
fetcher.quit()
return changed_detected, update_obj, text_content_before_ignored_filter, screenshot
return changed_detected, update_obj, text_content_before_ignored_filter, fetcher.screenshot
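
set_proxy_from_list() above resolves in a fixed order: the watch's own proxy, then the system default, then the first configured entry. A self-contained sketch of that order, slightly simplified to key-only matching against the (key, label) tuples that import_proxy_list (in store.py, further down) produces:

```python
# Resolution order for a watch's proxy: per-watch choice, then the system
# default, then the first entry of the configured list.
def resolve_proxy(watch_proxy, system_proxy, proxy_list):
    if not proxy_list:
        return None
    keys = [key for key, _label in proxy_list]
    if watch_proxy in keys:
        return watch_proxy
    if system_proxy in keys:
        return system_proxy
    return keys[0]

proxies = [("proxy-one", "http://127.0.0.1:3128"), ("proxy-two", "http://127.0.0.1:3129")]
assert resolve_proxy(None, None, proxies) == "proxy-one"
assert resolve_proxy("proxy-two", None, proxies) == "proxy-two"
assert resolve_proxy("gone", "proxy-two", proxies) == "proxy-two"
```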

View File

@@ -318,6 +318,7 @@ class commonSettingsForm(Form):
notification_format = SelectField('Notification format', choices=valid_notification_formats.keys(), default=default_notification_format)
fetch_backend = RadioField(u'Fetch method', choices=content_fetcher.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
extract_title_as_title = BooleanField('Extract <title> from document and use as watch title', default=False)
webdriver_delay = IntegerField('Wait seconds before extracting text', validators=[validators.Optional(), validators.NumberRange(min=1, message="Should contain one or more seconds")] )
class watchForm(commonSettingsForm):
@@ -337,9 +338,9 @@ class watchForm(commonSettingsForm):
method = SelectField('Request method', choices=valid_method, default=default_method)
ignore_status_codes = BooleanField('Ignore status codes (process non-2xx status codes as normal)', default=False)
trigger_text = StringListField('Trigger/wait for text', [validators.Optional(), ValidateListRegex()])
save_button = SubmitField('Save', render_kw={"class": "pure-button pure-button-primary"})
save_and_preview_button = SubmitField('Save & Preview', render_kw={"class": "pure-button pure-button-primary"})
proxy = RadioField('Proxy')
def validate(self, **kwargs):
if not super().validate():
@@ -358,6 +359,7 @@ class watchForm(commonSettingsForm):
# datastore.data['settings']['requests']..
class globalSettingsRequestForm(Form):
time_between_check = FormField(TimeBetweenCheckForm)
proxy = RadioField('Proxy')
# datastore.data['settings']['application']..
@@ -369,6 +371,7 @@ class globalSettingsApplicationForm(commonSettingsForm):
ignore_whitespace = BooleanField('Ignore whitespace')
real_browser_save_screenshot = BooleanField('Save last screenshot when using Chrome?')
removepassword_button = SubmitField('Remove password', render_kw={"class": "pure-button pure-button-primary"})
empty_pages_are_a_change = BooleanField('Treat empty pages as a change?', default=False)
render_anchor_tag_content = BooleanField('Render anchor tag content', default=False)
fetch_backend = RadioField('Fetch Method', default="html_requests", choices=content_fetcher.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
password = SaltyPasswordField()
@@ -382,4 +385,3 @@ class globalSettingsForm(Form):
requests = FormField(globalSettingsRequestForm)
application = FormField(globalSettingsApplicationForm)
save_button = SubmitField('Save', render_kw={"class": "pure-button pure-button-primary"})
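
The proxy RadioField added here gets its choices assigned at request time, or — per the @todo workaround above — is deleted from the form outright when no proxy list exists. A minimal standalone WTForms sketch of that pattern; the form class name is a hypothetical stand-in:

```python
# Declare the field up front, then at request time either remove it
# (no proxy list configured) or populate its choices dynamically.
from wtforms import Form, RadioField

class DemoWatchForm(Form):  # illustrative stand-in for watchForm
    proxy = RadioField('Proxy')

proxy_list = [('proxy-one', 'http://127.0.0.1:3128'),
              ('proxy-two', 'http://127.0.0.1:3129')]

form = DemoWatchForm()
if proxy_list is None:
    del form.proxy  # same deletion workaround as the @todo above
else:
    form.proxy.choices = proxy_list
print(form.proxy.choices)
```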

View File

@@ -0,0 +1,133 @@
from abc import ABC, abstractmethod
import time
import validators
class Importer():
remaining_data = []
new_uuids = []
good = 0
def __init__(self):
self.new_uuids = []
self.good = 0
self.remaining_data = []
@abstractmethod
def run(self,
data,
flash,
datastore):
pass
class import_url_list(Importer):
"""
Imports a list, can be in <code>https://example.com tag1, tag2, last tag</code> format
"""
def run(self,
data,
flash,
datastore,
):
urls = data.split("\n")
good = 0
now = time.time()
if (len(urls) > 5000):
flash("Importing 5,000 of the first URLs from your list, the rest can be imported again.")
for url in urls:
url = url.strip()
if not len(url):
continue
tags = ""
# 'tags' should be a csv list after the URL
if ' ' in url:
url, tags = url.split(" ", 1)
# Flask wtform validators won't work with basic auth, use validators package
# Up to 5000 per batch so we don't flood the server
if len(url) and validators.url(url.replace('source:', '')) and good < 5000:
new_uuid = datastore.add_watch(url=url.strip(), tag=tags, write_to_disk_now=False)
if new_uuid:
# Straight into the queue.
self.new_uuids.append(new_uuid)
good += 1
continue
# Worked past the 'continue' above, append it to the bad list
if self.remaining_data is None:
self.remaining_data = []
self.remaining_data.append(url)
flash("{} Imported from list in {:.2f}s, {} Skipped.".format(good, time.time() - now, len(self.remaining_data)))
class import_distill_io_json(Importer):
def run(self,
data,
flash,
datastore,
):
import json
good = 0
now = time.time()
self.new_uuids=[]
try:
data = json.loads(data.strip())
except json.decoder.JSONDecodeError:
flash("Unable to read JSON file, was it broken?", 'error')
return
if not data.get('data'):
flash("JSON structure looks invalid, was it broken?", 'error')
return
for d in data.get('data'):
d_config = json.loads(d['config'])
extras = {'title': d['name']}
if len(d['uri']) and good < 5000:
try:
# @todo we only support CSS ones at the moment
if d_config['selections'][0]['frames'][0]['excludes'][0]['type'] == 'css':
extras['subtractive_selectors'] = d_config['selections'][0]['frames'][0]['excludes'][0]['expr']
except KeyError:
pass
except IndexError:
pass
try:
extras['css_filter'] = d_config['selections'][0]['frames'][0]['includes'][0]['expr']
if d_config['selections'][0]['frames'][0]['includes'][0]['type'] == 'xpath':
extras['css_filter'] = 'xpath:' + extras['css_filter']
except KeyError:
pass
except IndexError:
pass
try:
extras['tag'] = ", ".join(d['tags'])
except KeyError:
pass
except IndexError:
pass
new_uuid = datastore.add_watch(url=d['uri'].strip(),
extras=extras,
write_to_disk_now=False)
if new_uuid:
# Straight into the queue.
self.new_uuids.append(new_uuid)
good += 1
flash("{} Imported from Distill.io in {:.2f}s, {} Skipped.".format(len(self.new_uuids), time.time() - now, len(self.remaining_data)))

View File

@@ -23,12 +23,14 @@ class model(dict):
'requests': {
'timeout': 15, # Default 15 seconds
'time_between_check': {'weeks': None, 'days': None, 'hours': 3, 'minutes': None, 'seconds': None},
'workers': 10 # Number of threads, lower is better for slow connections
'workers': 10, # Number of threads, lower is better for slow connections
'proxy': None # Preferred proxy connection
},
'application': {
'password': False,
'base_url' : None,
'extract_title_as_title': False,
'empty_pages_are_a_change': False,
'fetch_backend': os.getenv("DEFAULT_FETCH_BACKEND", "html_requests"),
'global_ignore_text': [], # List of text to ignore when calculating the comparison checksum
'global_subtractive_selectors': [],
@@ -40,7 +42,8 @@ class model(dict):
'notification_body': default_notification_body,
'notification_format': default_notification_format,
'real_browser_save_screenshot': True,
'schema_version' : 0
'schema_version' : 0,
'webdriver_delay': None # Extra delay in seconds before extracting text
}
}
}

View File

@@ -39,10 +39,12 @@ class model(dict):
'trigger_text': [], # List of text or regex to wait for until a change is detected
'fetch_backend': None,
'extract_title_as_title': False,
'proxy': None, # Preferred proxy connection
# Re #110, so then if this is set to None, we know to use the default value instead
# Requires setting to None on submit if it's the same as the default
# Should be all None by default, so we use the system default in this case.
'time_between_check': {'weeks': None, 'days': None, 'hours': None, 'minutes': None, 'seconds': None}
'time_between_check': {'weeks': None, 'days': None, 'hours': None, 'minutes': None, 'seconds': None},
'webdriver_delay': None
}
def __init__(self, *arg, **kw):
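
Both models default webdriver_delay to None, which encodes a two-level fallback: a per-watch None means "use the system value", and a system None means "no extra delay" (only the fetcher's env-defined minimum wait applies). A sketch of that resolution, matching the logic added to fetch_site_status above:

```python
# None-as-unset fallback for the extra text-extraction delay.
def effective_webdriver_delay(watch_delay, system_delay):
    if watch_delay is not None:
        return watch_delay
    if system_delay is not None:
        return system_delay
    return 0  # only the fetcher's own minimum wait applies

assert effective_webdriver_delay(None, None) == 0
assert effective_webdriver_delay(None, 7) == 7
assert effective_webdriver_delay(2, 7) == 2
```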

View File

@@ -0,0 +1,16 @@
$(document).ready(function() {
function toggle() {
if ($('input[name="application-fetch_backend"]:checked').val() != 'html_requests') {
$('#requests-override-options').hide();
$('#webdriver-override-options').show();
} else {
$('#requests-override-options').show();
$('#webdriver-override-options').hide();
}
}
$('input[name="application-fetch_backend"]').click(function (e) {
toggle();
});
toggle();
});

View File

@@ -1,13 +0,0 @@
window.addEventListener("load", (event) => {
// just an example for now
function toggleVisible(elem) {
// theres better ways todo this
var x = document.getElementById(elem);
if (x.style.display === "block") {
x.style.display = "none";
} else {
x.style.display = "block";
}
}
});

View File

@@ -0,0 +1,16 @@
$(document).ready(function() {
function toggle() {
if ($('input[name="fetch_backend"]:checked').val() != 'html_requests') {
$('#requests-override-options').hide();
$('#webdriver-override-options').show();
} else {
$('#requests-override-options').show();
$('#webdriver-override-options').hide();
}
}
$('input[name="fetch_backend"]').click(function (e) {
toggle();
});
toggle();
});

View File

@@ -309,10 +309,10 @@ footer {
font-weight: bold; }
.pure-form textarea {
width: 100%; }
.pure-form ul.fetch-backend {
.pure-form .inline-radio ul {
margin: 0px;
list-style: none; }
.pure-form ul.fetch-backend li > * {
.pure-form .inline-radio ul li > * {
display: inline-block; }
@media only screen and (max-width: 760px), (min-device-width: 768px) and (max-device-width: 1024px) {

View File

@@ -418,14 +418,16 @@ footer {
textarea {
width: 100%;
}
ul.fetch-backend {
margin: 0px;
list-style: none;
li {
> * {
display: inline-block;
.inline-radio {
ul {
margin: 0px;
list-style: none;
li {
> * {
display: inline-block;
}
}
}
}
}
}

View File

@@ -13,8 +13,7 @@ from threading import Lock
import re
import requests
from changedetectionio.model import Watch, App
from . model import App, Watch
# Is there an existing library to ensure some data store (JSON etc) is in sync with CRUD methods?
# Open a github issue if you know something :)
@@ -33,6 +32,7 @@ class ChangeDetectionStore:
self.needs_write = False
self.datastore_path = datastore_path
self.json_store_path = "{}/url-watches.json".format(self.datastore_path)
self.proxy_list = None
self.stop_thread = False
self.__data = App.model()
@@ -111,6 +111,14 @@ class ChangeDetectionStore:
secret = secrets.token_hex(16)
self.__data['settings']['application']['rss_access_token'] = secret
# Proxy list support - available as a selection in settings when text file is imported
# CSV list
# "name, address", or just "name"
proxy_list_file = "{}/proxies.txt".format(self.datastore_path)
if path.isfile(proxy_list_file):
self.import_proxy_list(proxy_list_file)
# Bump the update version by running updates
self.run_updates()
@@ -260,46 +268,14 @@ class ChangeDetectionStore:
return self.data['watching'][uuid].get(val)
# Remove a watchs data but keep the entry (URL etc)
def scrub_watch(self, uuid, limit_timestamp = False):
def scrub_watch(self, uuid):
import pathlib
import hashlib
del_timestamps = []
self.__data['watching'][uuid].update({'history': {}, 'last_checked': 0, 'last_changed': 0, 'newest_history_key': 0, 'previous_md5': False})
self.needs_write_urgent = True
changes_removed = 0
for timestamp, path in self.data['watching'][uuid]['history'].items():
if not limit_timestamp or (limit_timestamp is not False and int(timestamp) > limit_timestamp):
self.unlink_history_file(path)
del_timestamps.append(timestamp)
changes_removed += 1
if not limit_timestamp:
self.data['watching'][uuid]['last_checked'] = 0
self.data['watching'][uuid]['last_changed'] = 0
self.data['watching'][uuid]['previous_md5'] = ""
for timestamp in del_timestamps:
del self.data['watching'][uuid]['history'][str(timestamp)]
# If there was a limitstamp, we need to reset some meta data about the entry
# This has to happen after we remove the others from the list
if limit_timestamp:
newest_key = self.get_newest_history_key(uuid)
if newest_key:
self.data['watching'][uuid]['last_checked'] = int(newest_key)
# @todo should be the original value if it was less than newest key
self.data['watching'][uuid]['last_changed'] = int(newest_key)
try:
with open(self.data['watching'][uuid]['history'][str(newest_key)], "rb") as fp:
content = fp.read()
self.data['watching'][uuid]['previous_md5'] = hashlib.md5(content).hexdigest()
except (FileNotFoundError, IOError):
self.data['watching'][uuid]['previous_md5'] = ""
pass
self.needs_write = True
return changes_removed
for item in pathlib.Path(self.datastore_path).rglob(uuid+"/*.txt"):
unlink(item)
def add_watch(self, url, tag="", extras=None, write_to_disk_now=True):
if extras is None:
@@ -453,10 +429,26 @@ class ChangeDetectionStore:
import pathlib
# Only in the sub-directories
for item in pathlib.Path(self.datastore_path).rglob("*/*txt"):
if not str(item) in index:
print ("Removing",item)
unlink(item)
for uuid in self.data['watching']:
for item in pathlib.Path(self.datastore_path).rglob(uuid+"/*.txt"):
if not str(item) in index:
print ("Removing",item)
unlink(item)
def import_proxy_list(self, filename):
import csv
with open(filename, newline='') as f:
reader = csv.reader(f, skipinitialspace=True)
# @todo This loop could be improved
l = []
for row in reader:
if len(row):
if len(row)>=2:
l.append(tuple(row[:2]))
else:
l.append(tuple([row[0], row[0]]))
self.proxy_list = l if len(l) else None
# Run all updates
# IMPORTANT - Each update could be run even when they have a new install and the schema is correct
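
import_proxy_list() above accepts the "name, address" or bare "name" CSV rows described in the comments; a sketch of the tuples a proxies.txt produces, using an in-memory file for illustration:

```python
# Parse a proxies.txt-style CSV: two columns give (name, address),
# a single column reuses the name for both fields.
import csv
import io

sample = (
    "proxy-one, http://127.0.0.1:3128\n"
    "proxy-two, socks5://127.0.0.1:1080\n"
    "just-a-name\n"
)
proxy_list = []
for row in csv.reader(io.StringIO(sample), skipinitialspace=True):
    if row:
        proxy_list.append(tuple(row[:2]) if len(row) >= 2 else (row[0], row[0]))
print(proxy_list)
# [('proxy-one', 'http://127.0.0.1:3128'),
#  ('proxy-two', 'socks5://127.0.0.1:1080'),
#  ('just-a-name', 'just-a-name')]
```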

View File

@@ -2,7 +2,6 @@
{% from '_helpers.jinja' import render_field %}
{% macro render_common_settings_form(form, current_base_url, emailprefix) %}
<div class="pure-control-group">
{{ render_field(form.notification_urls, rows=5, placeholder="Examples:
Gitter - gitter://token/room

View File

@@ -9,6 +9,7 @@
const email_notification_prefix=JSON.parse('{{ emailprefix|tojson }}');
{% endif %}
</script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-settings.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script>
<div class="edit-form monospaced-textarea">
@@ -57,20 +58,40 @@
</div>
<div class="tab-pane-inner" id="request">
<div class="pure-control-group">
<div class="pure-control-group inline-radio">
{{ render_field(form.fetch_backend, class="fetch-backend") }}
<span class="pure-form-message-inline">
<p>Use the <strong>Basic</strong> method (default) where your watched site doesn't need Javascript to render.</p>
<p>The <strong>Chrome/Javascript</strong> method requires a network connection to a running WebDriver+Chrome server, set by the ENV var 'WEBDRIVER_URL'. </p>
</span>
</div>
<hr/>
<fieldset class="pure-group">
<span class="pure-form-message-inline">
{% if form.proxy %}
<div class="pure-control-group inline-radio">
{{ render_field(form.proxy, class="fetch-backend-proxy") }}
<span class="pure-form-message-inline">
Choose a proxy for this watch
</span>
</div>
{% endif %}
<fieldset class="pure-group" id="webdriver-override-options">
<div class="pure-form-message-inline">
<strong>If you're having trouble waiting for the page to be fully rendered (text missing etc), try increasing the 'wait' time here.</strong>
<br/>
This will wait <i>n</i> seconds before extracting the text.
</div>
<div class="pure-control-group">
{{ render_field(form.webdriver_delay) }}
</div>
{% if using_global_webdriver_wait %}
<div class="pure-form-message-inline">
<strong>Using the current global default settings</strong>
</div>
{% endif %}
</fieldset>
<fieldset class="pure-group" id="requests-override-options">
<div class="pure-form-message-inline">
<strong>Request override is currently only used by the <i>Basic fast Plaintext/HTTP Client</i> method.</strong>
</span>
</div>
<div class="pure-control-group">
{{ render_field(form.method) }}
</div>
@@ -125,7 +146,7 @@ User-Agent: wonderbra 1.0") }}
<li>CSS - Limit text to this CSS rule, only text matching this CSS rule is included.</li>
<li>JSON - Limit text to this JSON rule, using <a href="https://pypi.org/project/jsonpath-ng/">JSONPath</a>, prefix with <code>"json:"</code>, use <code>json:$</code> to force re-formatting if required, <a
href="https://jsonpath.com/" target="new">test your JSONPath here</a></li>
<li>XPath - Limit text to this XPath rule, simply start with a forward-slash, example <code>//*[contains(@class, 'sametext')]</code>, <a
<li>XPath - Limit text to this XPath rule, simply start with a forward-slash, example <code>//*[contains(@class, 'sametext')]</code> or <code>xpath://*[contains(@class, 'sametext')]</code>, <a
href="http://xpather.com/" target="new">test your XPath here</a></li>
</ul>
Please be sure that you thoroughly understand how to write CSS or JSONPath, XPath selector rules before filing an issue on GitHub! <a

View File

@@ -1,30 +1,86 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<div class="inner">
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<div class="edit-form monospaced-textarea">
<div class="tabs collapsable">
<ul>
<li class="tab" id="default-tab"><a href="#url-list">URL List</a></li>
<li class="tab"><a href="#distill-io">Distill.io</a></li>
</ul>
</div>
<div class="box-wrap inner">
<form class="pure-form pure-form-aligned" action="{{url_for('import_page')}}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
<fieldset class="pure-group">
<legend>
Enter one URL per line, and optionally add tags for each URL after a space, separated by commas (,):
<br>
<code>https://example.com tag1, tag2, last tag</code>
<br>
URLs which do not pass validation will stay in the textarea.
</legend>
<div class="tab-pane-inner" id="url-list">
<fieldset class="pure-group">
<legend>
Enter one URL per line, and optionally add tags for each URL after a space, separated by commas
(,):
<br>
<code>https://example.com tag1, tag2, last tag</code>
<br>
URLs which do not pass validation will stay in the textarea.
</legend>
<textarea name="urls" class="pure-input-1-2" placeholder="https://"
style="width: 100%;
<textarea name="urls" class="pure-input-1-2" placeholder="https://"
style="width: 100%;
font-family:monospace;
white-space: pre;
overflow-wrap: normal;
overflow-x: scroll;" rows="25">{{ remaining }}</textarea>
</fieldset>
overflow-x: scroll;" rows="25">{{ import_url_list_remaining }}</textarea>
</fieldset>
</div>
<div class="tab-pane-inner" id="distill-io">
<fieldset class="pure-group">
<legend>
Copy and paste your Distill.io watch 'export' file; this should be a JSON file.<br/>
This is <i>experimental</i>; supported fields are <code>name</code>, <code>uri</code>, <code>tags</code>, <code>config:selections</code>, the rest (including <code>schedule</code>) is ignored.
<br/>
<p>
How to export? <a href="https://distill.io/docs/web-monitor/how-export-and-import-monitors/">https://distill.io/docs/web-monitor/how-export-and-import-monitors/</a><br/>
Be sure to set your default fetcher to Chrome if required.<br/>
</p>
</legend>
<textarea name="distill-io" class="pure-input-1-2" style="width: 100%;
font-family:monospace;
white-space: pre;
overflow-wrap: normal;
overflow-x: scroll;" placeholder="Example Distill.io JSON export file
{
&quot;client&quot;: {
&quot;local&quot;: 1
},
&quot;data&quot;: [
{
&quot;name&quot;: &quot;Unraid | News&quot;,
&quot;uri&quot;: &quot;https://unraid.net/blog&quot;,
&quot;config&quot;: &quot;{\&quot;selections\&quot;:[{\&quot;frames\&quot;:[{\&quot;index\&quot;:0,\&quot;excludes\&quot;:[],\&quot;includes\&quot;:[{\&quot;type\&quot;:\&quot;xpath\&quot;,\&quot;expr\&quot;:\&quot;(//div[@id='App']/div[contains(@class,'flex')]/main[contains(@class,'relative')]/section[contains(@class,'relative')]/div[@class='container']/div[contains(@class,'flex')]/div[contains(@class,'w-full')])[1]\&quot;}]}],\&quot;dynamic\&quot;:true,\&quot;delay\&quot;:2}],\&quot;ignoreEmptyText\&quot;:true,\&quot;includeStyle\&quot;:false,\&quot;dataAttr\&quot;:\&quot;text\&quot;}&quot;,
&quot;tags&quot;: [],
&quot;content_type&quot;: 2,
&quot;state&quot;: 40,
&quot;schedule&quot;: &quot;{\&quot;type\&quot;:\&quot;INTERVAL\&quot;,\&quot;params\&quot;:{\&quot;interval\&quot;:4447}}&quot;,
&quot;ts&quot;: &quot;2022-03-27T15:51:15.667Z&quot;
}
]
}
" rows="25">{{ original_distill_json }}</textarea>
</fieldset>
</div>
<button type="submit" class="pure-button pure-input-1-2 pure-button-primary">Import</button>
</form>
</div>
</div>
</div>
{% endblock %}

View File

@@ -7,7 +7,7 @@
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
<fieldset>
<div class="pure-control-group">
This will remove all version snapshots/data, but keep your list of URLs. <br/>
This will remove ALL version snapshots/data, but keep your list of URLs. <br/>
You may like to use the <strong>BACKUP</strong> link first.<br/>
</div>
<br/>
@@ -17,12 +17,6 @@
<span class="pure-form-message-inline">Type in the word <strong>scrub</strong> to confirm that you understand!</span>
</div>
<br/>
<div class="pure-control-group">
<label for="confirmtext">Optional: Limit deletion of snapshots to snapshots <i>newer</i> than date/time</label>
<input type="datetime-local" id="limit_date" name="limit_date" />
<span class="pure-form-message-inline">dd/mm/yyyy hh:mm (24 hour format)</span>
</div>
<br/>
<div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Scrub!</button>
</div>

View File

@@ -9,10 +9,10 @@
const email_notification_prefix=JSON.parse('{{emailprefix|tojson}}');
{% endif %}
</script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='settings.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='global-settings.js')}}" defer></script>
<div class="edit-form">
<div class="tabs collapsable">
<ul>
@@ -62,6 +62,18 @@
<span class="pure-form-message-inline">When using a Chrome browser, a screenshot from the last check will be available on the Diff page</span>
</div>
<div class="pure-control-group">
{{ render_checkbox_field(form.application.form.empty_pages_are_a_change) }}
<span class="pure-form-message-inline">When a page contains HTML, but no renderable text appears (empty page), is this considered a change?</span>
</div>
{% if form.requests.proxy %}
<div class="pure-control-group inline-radio">
{{ render_field(form.requests.form.proxy, class="fetch-backend-proxy") }}
<span class="pure-form-message-inline">
Choose a default proxy for all watches
</span>
</div>
{% endif %}
</fieldset>
</div>
@@ -74,13 +86,23 @@
</div>
<div class="tab-pane-inner" id="fetching">
<div class="pure-control-group">
<div class="pure-control-group inline-radio">
{{ render_field(form.application.form.fetch_backend, class="fetch-backend") }}
<span class="pure-form-message-inline">
<p>Use the <strong>Basic</strong> method (default) where your watched sites don't need Javascript to render.</p>
<p>The <strong>Chrome/Javascript</strong> method requires a network connection to a running WebDriver+Chrome server, set by the ENV var 'WEBDRIVER_URL'. </p>
</span>
</div>
<fieldset class="pure-group" id="webdriver-override-options">
<div class="pure-form-message-inline">
<strong>If you're having trouble waiting for the page to be fully rendered (text missing etc), try increasing the 'wait' time here.</strong>
<br/>
This will wait <i>n</i> seconds before extracting the text.
</div>
<div class="pure-control-group">
{{ render_field(form.application.form.webdriver_delay) }}
</div>
</fieldset>
</div>

View File

@@ -13,7 +13,7 @@
{{ render_simple_field(form.tag, value=active_tag if active_tag else '', placeholder="watch group") }}
<button type="submit" class="pure-button pure-button-primary">Watch</button>
</fieldset>
<span style="color:#eee; font-size: 80%;"><img style="height: 1em;display:inline-block;" src="{{url_for('static_content', group='images', filename='spread.svg')}}" /> Tip: You can also add 'shared' watches. <a href="#">More info</a></a></span>
<span style="color:#eee; font-size: 80%;"><img style="height: 1em;display:inline-block;" src="{{url_for('static_content', group='images', filename='spread.svg')}}" /> Tip: You can also add 'shared' watches. <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Sharing-a-Watch">More info</a></a></span>
</form>
<div>
<a href="{{url_for('index')}}" class="pure-button button-tag {{'active' if not active_tag }}">All</a>

View File

@@ -5,18 +5,17 @@ import time
from flask import url_for
from .util import live_server_setup
def test_import(client, live_server):
def test_setup(client, live_server):
live_server_setup(live_server)
def test_import(client, live_server):
# Give the endpoint time to spin up
time.sleep(1)
res = client.post(
url_for("import_page"),
data={
"distill-io": "",
"urls": """https://example.com
https://example.com tag1
https://example.com tag1, other tag"""
@@ -26,3 +25,96 @@ https://example.com tag1, other tag"""
assert b"3 Imported" in res.data
assert b"tag1" in res.data
assert b"other tag" in res.data
res = client.get(url_for("api_delete", uuid="all"), follow_redirects=True)
# Clear flask alerts
res = client.get( url_for("index"))
res = client.get( url_for("index"))
def xtest_import_skip_url(client, live_server):
# Give the endpoint time to spin up
time.sleep(1)
res = client.post(
url_for("import_page"),
data={
"distill-io": "",
"urls": """https://example.com
:ht000000broken
"""
},
follow_redirects=True,
)
assert b"1 Imported" in res.data
assert b"ht000000broken" in res.data
assert b"1 Skipped" in res.data
res = client.get(url_for("api_delete", uuid="all"), follow_redirects=True)
# Clear flask alerts
res = client.get( url_for("index"))
def test_import_distillio(client, live_server):
distill_data='''
{
"client": {
"local": 1
},
"data": [
{
"name": "Unraid | News",
"uri": "https://unraid.net/blog",
"config": "{\\"selections\\":[{\\"frames\\":[{\\"index\\":0,\\"excludes\\":[],\\"includes\\":[{\\"type\\":\\"xpath\\",\\"expr\\":\\"(//div[@id='App']/div[contains(@class,'flex')]/main[contains(@class,'relative')]/section[contains(@class,'relative')]/div[@class='container']/div[contains(@class,'flex')]/div[contains(@class,'w-full')])[1]\\"}]}],\\"dynamic\\":true,\\"delay\\":2}],\\"ignoreEmptyText\\":true,\\"includeStyle\\":false,\\"dataAttr\\":\\"text\\"}",
"tags": ["nice stuff", "nerd-news"],
"content_type": 2,
"state": 40,
"schedule": "{\\"type\\":\\"INTERVAL\\",\\"params\\":{\\"interval\\":4447}}",
"ts": "2022-03-27T15:51:15.667Z"
}
]
}
'''
# Give the endpoint time to spin up
time.sleep(1)
client.get(url_for("api_delete", uuid="all"), follow_redirects=True)
res = client.post(
url_for("import_page"),
data={
"distill-io": distill_data,
"urls" : ''
},
follow_redirects=True,
)
assert b"Unable to read JSON file, was it broken?" not in res.data
assert b"1 Imported from Distill.io" in res.data
res = client.get( url_for("edit_page", uuid="first"))
assert b"https://unraid.net/blog" in res.data
assert b"Unraid | News" in res.data
# flask/wtforms should recode this, check we see it
# wtforms encodes it like id=&#39 ,but html.escape makes it like id=&#x27
# - so just check it manually :(
#import json
#import html
#d = json.loads(distill_data)
# embedded_d=json.loads(d['data'][0]['config'])
# x=html.escape(embedded_d['selections'][0]['frames'][0]['includes'][0]['expr']).encode('utf-8')
assert b"xpath:(//div[@id=&#39;App&#39;]/div[contains(@class,&#39;flex&#39;)]/main[contains(@class,&#39;relative&#39;)]/section[contains(@class,&#39;relative&#39;)]/div[@class=&#39;container&#39;]/div[contains(@class,&#39;flex&#39;)]/div[contains(@class,&#39;w-full&#39;)])[1]" in res.data
# did the tags work?
res = client.get( url_for("index"))
assert b"nice stuff" in res.data
assert b"nerd-news" in res.data
res = client.get(url_for("api_delete", uuid="all"), follow_redirects=True)
# Clear flask alerts
res = client.get(url_for("index"))

View File

@@ -0,0 +1,102 @@
#!/usr/bin/python3
import time
from flask import url_for
from urllib.request import urlopen
from .util import set_original_response, set_modified_response, live_server_setup
sleep_time_for_fetch_thread = 3
def set_nonrenderable_response():
test_return_data = """<html>
<head><title>modified head title</title></head>
<!-- like when some angular app was broken and doesn't render or whatever -->
<body>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def test_check_basic_change_detection_functionality(client, live_server):
set_original_response()
live_server_setup(live_server)
# Add our URL to the import page
res = client.post(
url_for("import_page"),
data={"urls": url_for('test_endpoint', _external=True)},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(sleep_time_for_fetch_thread)
# Do this a few times.. ensures we don't accidentally set the status
for n in range(3):
client.get(url_for("api_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
#####################
client.post(
url_for("settings_page"),
data={"application-empty_pages_are_a_change": "",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"},
follow_redirects=True
)
# this should not trigger a change, because no good text could be converted from the HTML
set_nonrenderable_response()
client.get(url_for("api_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
# ok now do the opposite
client.post(
url_for("settings_page"),
data={"application-empty_pages_are_a_change": "y",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"},
follow_redirects=True
)
set_modified_response()
client.get(url_for("api_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should now report a change (new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' in res.data
#
# Cleanup everything
res = client.get(url_for("api_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data
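
The behaviour under test — a page with a head but an empty body producing no change unless empty_pages_are_a_change is enabled — hinges on the extracted text being empty. A sketch with inscriptis, the text extractor changedetection.io relies on (assumption: the title does not count as renderable body text):

```python
# An HTML document with markup but no renderable body text extracts to "".
from inscriptis import get_text

html_doc = """<html>
<head><title>modified head title</title></head>
<body></body>
</html>"""

stripped = get_text(html_doc).strip()
if len(stripped) == 0:
    # this is the condition that raises ReplyWithContentButNoText
    # unless the empty_pages_are_a_change setting is enabled
    print("no renderable text - not treated as a change by default")
```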

View File

@@ -116,4 +116,46 @@ def test_xpath_validation(client, live_server):
data={"css_filter": "/something horrible", "url": test_url, "tag": "", "headers": "", 'fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"is not a valid XPath expression" in res.data
assert b"is not a valid XPath expression" in res.data
# actually only really used by the distill.io importer, but could be handy too
def test_check_with_prefix_css_filter(client, live_server):
res = client.get(url_for("api_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data
# Give the endpoint time to spin up
time.sleep(1)
set_original_response()
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(3)
res = client.post(
url_for("edit_page", uuid="first"),
data={"css_filter": "xpath://*[contains(@class, 'sametext')]", "url": test_url, "tag": "", "headers": "", 'fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Updated watch." in res.data
time.sleep(3)
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
with open('/tmp/preview-page-debug.html', 'wb') as f:
f.write(res.data)
assert b"Some text thats the same" in res.data #in selector
assert b"Some text that will change" not in res.data #not in selector
client.get(url_for("api_delete", uuid="all"), follow_redirects=True)

View File

@@ -52,6 +52,10 @@ class update_worker(threading.Thread):
raise Exception("Error - returned data from the fetch handler SHOULD be bytes")
except PermissionError as e:
self.app.logger.error("File permission error updating", uuid, str(e))
except content_fetcher.ReplyWithContentButNoText as e:
# Totally fine, it's by choice - just continue on, nothing more to care about
# Page had elements/content but no renderable text
pass
except content_fetcher.EmptyReply as e:
# Some kind of custom to-str handler in the exception handler that does this?
err_text = "EmptyReply: Status Code {}".format(e.status_code)

View File

@@ -17,12 +17,19 @@ services:
# Alternative WebDriver/selenium URL, do not wrap the value in quotes (" or ')!
# - WEBDRIVER_URL=http://browser-chrome:4444/wd/hub
#
# WebDriver proxy settings webdriver_proxyType, webdriver_ftpProxy, webdriver_httpProxy, webdriver_noProxy,
# webdriver_proxyAutoconfigUrl, webdriver_sslProxy, webdriver_autodetect,
# WebDriver proxy settings webdriver_proxyType, webdriver_ftpProxy, webdriver_noProxy,
# webdriver_proxyAutoconfigUrl, webdriver_autodetect,
# webdriver_socksProxy, webdriver_socksUsername, webdriver_socksVersion, webdriver_socksPassword
#
# https://selenium-python.readthedocs.io/api.html#module-selenium.webdriver.common.proxy
#
# Alternative Playwright URL, do not wrap the value in quotes (" or ')!
# - PLAYWRIGHT_DRIVER_URL=ws://playwright-chrome:3000/
#
# Playwright proxy settings playwright_proxy_server, playwright_proxy_bypass, playwright_proxy_username, playwright_proxy_password
#
# https://playwright.dev/python/docs/api/class-browsertype#browser-type-launch-option-proxy
#
# Plain requests - proxy support example.
# - HTTP_PROXY=socks5h://10.10.1.10:1080
# - HTTPS_PROXY=socks5h://10.10.1.10:1080
@@ -58,6 +65,13 @@ services:
# # Workaround to avoid the browser crashing inside a docker container
# # See https://github.com/SeleniumHQ/docker-selenium#quick-start
# - /dev/shm:/dev/shm
# restart: unless-stopped
# Used for fetching pages via Playwright+Chrome where you need Javascript support.
# playwright-chrome:
# hostname: playwright-chrome
# image: browserless/chrome
# restart: unless-stopped
volumes:

View File

@@ -40,3 +40,4 @@ selenium ~= 4.1.0
# need to revisit flask login versions
werkzeug ~= 2.0.0
# playwright is installed at Dockerfile build time because it's not available on all platforms