Compare commits


158 Commits

Author SHA1 Message Date
dgtlmoon
d40773d595 Attempt slim-bookworm ssl3 upgrade 2023-06-17 23:42:31 +02:00
dgtlmoon
a4c620c308 Code - Adding CI test for search (#1626) 2023-06-13 15:03:32 +02:00
dgtlmoon
9434eac72d 0.42.3 2023-06-12 15:28:51 +02:00
dgtlmoon
edb5e20de6 Bug fix - Fixed crash when deleting watch from UI when watch was already manually deleted from datadir (#1623) 2023-06-12 15:10:48 +02:00
dgtlmoon
e62eeb1c4a README - Update links to new website 2023-06-02 18:58:06 +02:00
Maciej Rapacz
a4e6fd1ec3 Fetcher / Parser - Automatically attempt to extract JSON from document when document contains JSON but could be wrapped in HTML (#1593) 2023-05-30 08:57:17 +02:00
dgtlmoon
d8b9f0fd78 Test improvement - Also test that custom request headers works with Playwright/Browserless (#1607) 2023-05-29 17:44:38 +02:00
dgtlmoon
f9387522ee Fetching - Be sure that content-type detection works when the headers are a mixed case (#1604) 2023-05-29 16:11:43 +02:00
dgtlmoon
ba8d2e0c2d UI/Fetching - Update "Filter not found" message to be more explanatory/helpful (#1602) 2023-05-28 12:09:51 +02:00
dgtlmoon
247db22a33 Restock monitor - Updating texts for tickets available/unavailable restock detection 2023-05-27 13:31:35 +02:00
William
aeabd5b3fc Docs - Update README.md (Changed LXML re:math reference to re:match) (#1594) 2023-05-25 16:55:52 +02:00
dgtlmoon
e9e1ce893f 0.42.2 2023-05-25 16:47:30 +02:00
dgtlmoon
b5a415c7b6 UI - Configurable pager size #1599 #1598 2023-05-25 16:38:54 +02:00
dgtlmoon
9e954532d6 Fetcher - Ability to specify headers from a textfile per watch, global or per tag ( https://github.com/dgtlmoon/changedetection.io/wiki/Adding-headers-from-an-external-file ) 2023-05-22 17:19:52 +02:00
dgtlmoon
955835df72 Restock detection - Better reporting when it fails (#1584) 2023-05-21 23:10:39 +02:00
dgtlmoon
1aeafef910 Fetcher - Puppeteer experimental fetcher wasn't returning the status-code (#1585) 2023-05-21 23:10:08 +02:00
dgtlmoon
1367197df7 Update README.md 2023-05-21 21:28:19 +02:00
dgtlmoon
143971123d 0.42.1 2023-05-21 14:20:23 +02:00
dgtlmoon
04d2d3fb00 Fetcher fix - Clear any fetch error when the fetched document was the same (clear any error that occurred between fetching a document that was the same) 2023-05-21 12:14:18 +02:00
dgtlmoon
236f0c098d 0.42 2023-05-18 22:10:10 +02:00
dgtlmoon
582c6b465b UI - "Search List" also works for 'Title' field 2023-05-18 19:24:13 +02:00
dgtlmoon
a021ba87fa UI - New "Search List" icon and functionality (#1580) 2023-05-18 18:58:49 +02:00
dgtlmoon
e9057cb851 VisualSelector - Add message when first version cannot be found 2023-05-15 16:57:39 +02:00
dgtlmoon
72ec438caa UI - update link to official project page 2023-05-15 13:31:30 +02:00
dgtlmoon
367dec48e1 BrowserSteps - Dont highlight elements that are the full page width (body, wrappers etc) 2023-05-15 10:43:33 +02:00
dgtlmoon
dd87912c88 BrowserSteps - Support for float seconds (0.5 etc) 2023-05-15 10:35:25 +02:00
dgtlmoon
0126cb0aac BrowserSteps - Session keep alive timer countdown fix 2023-05-13 00:30:37 +02:00
dgtlmoon
463b2d0449 BrowserSteps - adding setup check 2023-05-12 15:41:00 +02:00
dgtlmoon
e4f6d54ae2 BrowserSteps - Refactored to re-use playwright context which should solve some errors 2023-05-12 15:38:55 +02:00
dgtlmoon
5f338d7824 BrowserSteps - Be sure to select the most appropriate input/button/a when an input element is wrapped in a <div> or other 2023-05-12 10:35:18 +02:00
dgtlmoon
0b563a93ec Fetcher - Experimental fetcher - dont cache embedded data URLs 2023-05-11 16:52:32 +02:00
dgtlmoon
d939882dde Fetcher - Experimental fetcher improvements (Code TidyUp, Improve tests, revert to old playwright when using BrowserSteps for now) (#1564) 2023-05-11 16:36:35 +02:00
dgtlmoon
690cf4acc9 BrowserSteps - Include nice big start button SVG 2023-05-11 16:34:50 +02:00
dgtlmoon
3cb3c7ba2e BrowserSteps - Remove unreliable method for detecting if the element has a "click" listener and default to click (switch to 'Click X,Y' will return the right co-ords anyway) 2023-05-11 16:26:46 +02:00
dgtlmoon
5325918f29 Puppeteer fetcher, adding disk cache and other fixes (#1563) 2023-05-10 23:23:34 +02:00
dgtlmoon
8eee913438 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2023-05-07 14:19:38 +02:00
dgtlmoon
06921d973e UI - Adding shortcut list select button for "clear/reset history" 2023-05-07 14:19:30 +02:00
dgtlmoon
316f28a0f2 Fetcher - Experimental fetcher fixes, now only enabled with 'USE_EXPERIMENTAL_PUPPETEER_FETCH' env var (default off) (#1561) 2023-05-07 13:49:53 +02:00
dgtlmoon
3801d339f5 UI - Adding shortcut list select button for "clear/reset history" 2023-05-07 13:47:17 +02:00
dgtlmoon
d814535dc6 Element scraper - wrap offset detection in try/catch 2023-05-07 13:15:38 +02:00
dgtlmoon
cf3f3e4497 BrowserSteps - BrowserSteps was not always following proxy information 2023-05-07 13:15:29 +02:00
dgtlmoon
ba76c2a280 BrowserSteps - remove minor delay 2023-05-07 13:15:20 +02:00
dgtlmoon
94f38f052e Fetcher - playwright/browserless - Use builtin node puppeteer handler in browserless, scales way better, and is faster (#1559) 2023-05-05 21:58:08 +02:00
Raymond Ha
1710885fc4 UI - Fix back navigation / browser history (#1556) 2023-05-04 16:54:04 +02:00
dgtlmoon
2018e73240 UI - HTML validation improvements for edit forms (#1553) 2023-04-30 10:38:50 +02:00
dgtlmoon
fae8c89a4e UI - Various minor HTML validation fixes 2023-04-29 22:29:57 +02:00
dgtlmoon
40988c55c6 UI - pagination - use count including tag filter for pagination display 2023-04-29 20:19:18 +02:00
dgtlmoon
5aa713b7ea UI - Notifications - Adding icon to "Add Email" button 2023-04-29 20:14:42 +02:00
dgtlmoon
e1f5dfb703 UI - Adding pagination to watch list (#1549) 2023-04-29 19:24:13 +02:00
dgtlmoon
966600d28e UI - Set selected watches as 'viewed' (#1550) 2023-04-29 19:20:19 +02:00
dgtlmoon
e7ac356d99 UI - Fix missing </span> in watch list when using restock detection 2023-04-29 18:44:57 +02:00
dgtlmoon
e874df4ffc UI - Make sort order and type sticky in cookies, ability to sort by watch created time (#1519) 2023-04-29 17:44:23 +02:00
dgtlmoon
d1f44d0345 Notifications - Send test notification should use system defaults for body and title if not set in watch (#1547 #1503) 2023-04-29 16:20:01 +02:00
dgtlmoon
8536af0845 Adding generic changedetection.io SVG icon #1527 2023-04-14 09:50:55 +02:00
dgtlmoon
9076ba6bd3 Tests - error test - be sure to clear results from other test parts 2023-04-06 16:12:18 +02:00
dgtlmoon
43af18e2bc Update README.md 2023-04-06 15:26:06 +02:00
dgtlmoon
ad75e8cdd0 Tests - Add test to check that low level fetch errors are cleared on next check 2023-04-06 14:46:08 +02:00
dgtlmoon
f604643356 Restock alerts - adding extra detection texts 2023-04-06 13:51:33 +02:00
dgtlmoon
d5fd22f693 Restock monitor - Identify the cases where the product is also definitely in stock (#1489) 2023-03-23 18:34:56 +01:00
dgtlmoon
1d9d11b3f5 Automated CI test for ensuring pypi package was built correctly (#1488) 2023-03-23 12:20:18 +01:00
dgtlmoon
f49464f451 GitHub container build - 'provenance' was disabled 2023-03-22 10:40:49 +01:00
dgtlmoon
bc6bde4062 0.41.1 2023-03-21 23:16:01 +01:00
dgtlmoon
2863167f45 Fix for pip installations 2023-03-21 23:15:53 +01:00
dgtlmoon
ce3966c104 0.41 2023-03-21 20:30:21 +01:00
dgtlmoon
d5f574ca17 Notifications - Include triggered text token as {{triggered_text}} in notifications, so you can send just the content that matches. (#1485) 2023-03-21 19:16:13 +01:00
dgtlmoon
c96ece170a Notification tokens - add comment that the {{tokens}} can be used in the URLs also 2023-03-21 19:04:12 +01:00
dgtlmoon
1fb90bbddc Quick add form - adjust font size and rename stock recheck 2023-03-20 20:19:32 +01:00
dgtlmoon
55b6ae86e8 Ability to set which text to process triggers on (added, removed, changed) according to the difference (#1483) 2023-03-20 20:16:57 +01:00
dgtlmoon
66b892f770 Restock / stock / out of stock monitor - bumping detection texts 2023-03-20 15:01:52 +01:00
dgtlmoon
3b80bb2f0e Use brotli for reducing the size of the text snapshots (#1482) 2023-03-19 21:12:22 +01:00
dgtlmoon
e6d2d87b31 Notification screenshots - now PNG only for now to save disk space (no point creating two images) (#1481) 2023-03-18 20:52:52 +01:00
dgtlmoon
6e71088cde New feature - Restock / stock / out of stock monitor option/mode 2023-03-18 20:36:26 +01:00
dgtlmoon
2bc988dffc UI - Clone/copy watch - A paused watch should not be checked when copied/cloned #1471. 2023-03-17 23:58:15 +01:00
dgtlmoon
a578de36c5 Update README.md 2023-03-17 16:56:29 +01:00
dgtlmoon
4c74d39df0 Code - Abstract out the diff fetch types to make it easier to integrate new ones (#1467) 2023-03-12 18:11:53 +01:00
dgtlmoon
c454cbb808 BrowserSteps - Adding Goto URL step 2023-03-12 17:22:56 +01:00
dgtlmoon
6f1eec0d5a Fixing bad linebreak definition </br> in notifications and UI (#1465) 2023-03-12 17:05:34 +01:00
reecespieces
0d05ee1586 Notification Improvements - New tokens {{diff_added}} and {{diff_removed}}, removed whitespace around added and into ( Issue #905 ) (#1454) 2023-03-12 16:21:47 +01:00
dgtlmoon
23476f0e70 Update README.md 2023-03-01 23:13:35 +01:00
dgtlmoon
cf363971c1 Bug - False change alerts - code cleanups Re #962 (#1444) 2023-02-28 18:04:58 +01:00
dgtlmoon
35409f79bf Update README.md 2023-02-28 14:55:43 +01:00
dgtlmoon
fc88306805 Be sure that process_changedetection_results is off after PageUnloadable and EmptyReply exceptions from fetcher - Re #962 (#1439) 2023-02-26 13:54:14 +01:00
dgtlmoon
8253074d56 False change alerts fix - Don't reset watch checksum when a fetch error happens, adjust test to not test for fluctuating filter (#1437) 2023-02-25 22:14:47 +01:00
Fabian Affolter
5f9c8db3e1 Library update - Replace bs4 with beautifulsoup4 (#1433) 2023-02-25 22:06:13 +01:00
dgtlmoon
abf234298c API - Including last_changed timestamp in watch API info (#1436) 2023-02-25 22:00:46 +01:00
Hmmbob
0e1032a36a Update apprise to 1.3.0 (#1430) 2023-02-25 21:06:12 +01:00
dgtlmoon
3b96e40464 API documentation - improving example for list watches 2023-02-22 23:43:14 +01:00
dgtlmoon
c747cf7ba8 API documentation - improving example for snapshot history 2023-02-22 23:40:16 +01:00
dgtlmoon
3e98c8ae4b API - Adding current version to 'System Information' endpoint, bumping API docs, Re #1429 2023-02-22 23:34:36 +01:00
dgtlmoon
aaad71fc19 Further improving API documentation Re #1426 2023-02-22 21:30:02 +01:00
dgtlmoon
78f93113d8 Improving API documentation Re #1426 2023-02-22 20:57:01 +01:00
dgtlmoon
e9e586205a Browser Steps - Adding "Wait for text" and "Wait for text in element" Re #1427 2023-02-22 20:10:21 +01:00
dgtlmoon
89f1ba58b6 Re #1382 - UI fix - sorting now works with selected tag 2023-02-17 20:39:18 +01:00
dgtlmoon
6f4fd011e3 Dont rewrite/resave snapshot when its the same data, just bump the history index, saves disk space. (#1414) 2023-02-17 17:15:27 +01:00
dgtlmoon
900dc5ee78 Fetching - False alerts issue #962 - be sure to avoid triggering changedetection when checksums were the same (#1410) 2023-02-17 16:59:03 +01:00
dgtlmoon
7b8b50138b Deleting a watch now removes the entire watch storage directory (#1408) 2023-02-11 14:10:54 +01:00
dgtlmoon
01af21f856 Use year/date in the backup snapshot zip filename instead of epoch seconds (#1377 #1407) 2023-02-11 13:44:16 +01:00
dgtlmoon
f7f4ab314b PDF text conversion - fix bug where it detected a site as a PDF file incorrectly Re #1392 #1393 2023-02-08 09:32:57 +01:00
dgtlmoon
ce0355c0ad Remove unused code (#1394) 2023-02-08 09:32:15 +01:00
dgtlmoon
0f43213d9d UI - preview page - Fix bug where playwright/chrome was system default and [preview] didnt show snapshot 2023-02-07 16:55:34 +01:00
dgtlmoon
93c57d9fad Adding example docker-compose.yml config to ignore errors from self-signed certs #1389 2023-02-06 17:24:12 +01:00
dgtlmoon
3cdd075baf 0.40.2 2023-02-03 19:20:13 +01:00
dgtlmoon
5c617e8530 Code cleanup - remove unused import 2023-02-03 18:35:58 +01:00
dgtlmoon
1a48965ba1 UI fix - Fix logic for showing screenshot on diff page (#1379) 2023-02-03 11:23:48 +01:00
dgtlmoon
41856c4ed8 Re #1365 - Playwright - Browser "Service Workers" should be enabled by default but unset via env var PLAYWRIGHT_SERVICE_WORKERS=block (#1367) 2023-02-01 20:50:40 +01:00
dgtlmoon
0ed897c50f New setting to allow passwordless access to your 'diff' page - perfect for sharing your diff page securely, refactored login code (#1357) 2023-01-29 22:36:55 +01:00
dgtlmoon
f8e587c415 Security - Possible stored XSS in watch list - Only permit HTTP/HTTP/FTP by default - override with env var SAFE_PROTOCOL_REGEX (#1359) 2023-01-29 11:12:06 +01:00
dgtlmoon
d47a25eb6d Playwright - Removing old bug fix where playwright needed screenshot called twice to make the full screen screenshot be actually fullscreen (#1356) 2023-01-28 15:02:53 +01:00
dgtlmoon
9a0792d185 Fetch backend UI default fixes for VisualSelector and BrowserSteps (#1344) 2023-01-25 19:47:54 +01:00
dgtlmoon
948ef7ade4 Fix fetch UI default fetch backend option icon (#1343) 2023-01-25 18:07:44 +01:00
dgtlmoon
0ba139f8f9 Docker container build - docker container buildx version change causing errors with watchtower and others (#1336) 2023-01-24 23:45:43 +01:00
dgtlmoon
a9431191fc 0.40.1.1 2023-01-22 13:03:15 +01:00
dgtlmoon
774451f256 Re #1328 - add -6 flag to enable IPv6 (#1329) 2023-01-22 11:10:25 +01:00
dgtlmoon
04577cbf32 0.40.1.0 2023-01-21 15:38:54 +01:00
dgtlmoon
f2864af8f1 Update README.md 2023-01-21 14:02:14 +01:00
dgtlmoon
9a36d081c4 Setting docker-compose.yml version to 3.2 so it works with portainer and others #1306 #1144 #1079 2023-01-21 13:50:36 +01:00
dgtlmoon
7048a0acbd UI - Fix wrong logic when dealing with webdriver/playwright watch screenshot settings (#1325) 2023-01-21 13:47:32 +01:00
dgtlmoon
fba719ab8d Ability for watch to use a more obvious system default fetcher (#1320) 2023-01-19 21:57:58 +01:00
dgtlmoon
7c5e2d00af Update README.md 2023-01-17 22:02:51 +01:00
dgtlmoon
02b8fc0c18 pip - eventlet doesnt support dnspython >=2.3.0 (Fixes build error) 2023-01-17 22:01:56 +01:00
dgtlmoon
de15dfd80d Reliability fix - Remove loop that could cause app to stop checking if data changes (#1313) 2023-01-15 16:12:47 +01:00
dgtlmoon
024c8d8fd5 API - Improvements, support PUT for updating existing watch, set muted state, set paused state, see https://changedetection.io/docs/api_v1/index.html (#1213) 2023-01-10 19:00:57 +01:00
dgtlmoon
fab7d325f7 Data storage - Don't recreate DB if its corrupt, exit with error cleanly so operator can look into the problem (#1296) 2023-01-08 14:47:31 +01:00
jtagcat
58c7cbeac7 UI: Updating queued success message (#1285) 2023-01-05 21:12:02 +01:00
Abhishek Malani
ab9efdfd14 README.md - Fix release link (#1277) 2022-12-29 11:06:51 +01:00
Hmmbob
65d5a5d34c Notifications: updating apprise (slack notification fixes and others) (#1272) 2022-12-28 18:34:55 +01:00
dgtlmoon
93c157ee7f Remove docker-compose version so it works on any modern version #1144 (#1268) 2022-12-26 20:37:31 +01:00
Bill Metangmo
de85db887c Update the docker compose file to any version (#1079) (#1144) 2022-12-26 20:36:42 +01:00
dgtlmoon
50805ca38a IPv6 support for listening on (#1267) 2022-12-26 20:36:16 +01:00
dgtlmoon
fc6424c39e Test improvements (#1264) 2022-12-26 14:17:40 +01:00
dgtlmoon
f0966eb23a 0.40.0.4 2022-12-25 18:25:45 +01:00
dgtlmoon
e4fb5ab4da UI - Suggest adding proxy for watch when 403 access denied is reached (#1260) 2022-12-23 22:26:24 +01:00
dgtlmoon
e99f07a51d Filters & Notifications - fixed tokens in filter not found notification 2022-12-22 10:05:17 +01:00
dgtlmoon
08ee223b5f UI - Fix broken html tags in settings page 2022-12-20 18:57:26 +01:00
dgtlmoon
572f9b8a31 Proxy Settings in UI - TidyUp BrightData text 2022-12-20 10:08:16 +01:00
dgtlmoon
fcfd1b5e10 Ability to configure extra proxies via the UI (#1235) 2022-12-19 21:48:01 +01:00
dgtlmoon
0790dd555e Docker container updates - use Python 3.10, remove unused packages 2022-12-19 20:46:02 +01:00
dgtlmoon
0b20dc7712 Tidy up list icons a bit (#1250) 2022-12-19 20:30:32 +01:00
dgtlmoon
13c4121f52 PDF File change detection - Initial PDF fetcher support with basic text extraction (#1244) 2022-12-19 17:51:41 +01:00
dgtlmoon
e8e176f3bd Testing - Run test as fully built docker container (#1245) 2022-12-19 14:41:34 +01:00
dgtlmoon
7a1d2d924e Dark mode - system setting var is not required (its cookie based) 2022-12-19 14:13:57 +01:00
dgtlmoon
c3731cf055 0.40.0.3 2022-12-19 12:41:52 +01:00
dgtlmoon
a287e5a86c Visual Selector - Select smallest/most precise element first, better filtering of zero size elements 2022-12-19 12:33:31 +01:00
dgtlmoon
235535c327 Fetching - Check the most overdue watch first (#1242) 2022-12-17 15:40:57 +01:00
dgtlmoon
44dc62da2d Overview list - Checkbox action "Recheck" 2022-12-16 18:35:09 +01:00
dgtlmoon
0c380c170f Playwright - Better error reporting and re-try fetch on fail once (#1238) 2022-12-16 18:06:14 +01:00
dgtlmoon
b7a2501d64 Fetching - Always sort the key order of JSON content for less false alerts (May cause an alert on upgrade, but will be better going forwards) #1219 2022-12-15 09:13:09 +01:00
dgtlmoon
e970fef991 Fetcher + VisualSelector - xPath filter with attribute filter was breaking the element finder 2022-12-14 19:06:49 +01:00
dgtlmoon
b76148a0f4 Fetcher - CPU usage - Skip processing if the previous checksum and the just fetched one was the same (#925) 2022-12-14 15:08:34 +01:00
dgtlmoon
93cc30437f Playwright+BrowserSteps - Fetch changes - Fetch simply after page starts rendering + delay seconds, disable service workers 2022-12-14 12:16:04 +01:00
dgtlmoon
6562d6e0d4 Improve ARM/rust build comment 2022-12-13 12:28:20 +01:00
dgtlmoon
6c217cc3b6 README.md - Improving JSONPath example for LD+JSON product data 2022-12-11 11:14:52 +01:00
dgtlmoon
f30cdf0674 0.40.0.2 2022-12-08 22:36:59 +01:00
dgtlmoon
14da0646a7 Price follower - Dont scan for ldjson data when 'no' was clicked on the suggestion (#1207) 2022-12-08 22:35:37 +01:00
dgtlmoon
b413cdecc7 Adding missing parts for pip build Re #1206 2022-12-08 21:54:55 +01:00
dgtlmoon
7bf52d9275 0.40.0 2022-12-08 20:09:42 +01:00
dgtlmoon
09e6624afd VisualSelector - Exclude items that are not interactable or visible 2022-12-08 20:08:41 +01:00
dgtlmoon
b58fd995b5 Automatically offer to track LD+JSON product price data (#1204) 2022-12-08 19:28:20 +01:00
137 changed files with 6907 additions and 1578 deletions

@@ -50,7 +50,6 @@ jobs:
  python -m pip install --upgrade pip
  pip install flake8 pytest
  if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- if [ -f requirements-dev.txt ]; then pip install -r requirements-dev.txt; fi
  - name: Create release metadata
  run: |
@@ -99,6 +98,8 @@ jobs:
  platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7
  cache-from: type=local,src=/tmp/.buildx-cache
  cache-to: type=local,dest=/tmp/.buildx-cache
+ # Looks like this was disabled
+ # provenance: false
  # A new tagged release is required, which builds :tag and :latest
  - name: Build and push :tag
@@ -117,6 +118,8 @@ jobs:
  platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7
  cache-from: type=local,src=/tmp/.buildx-cache
  cache-to: type=local,dest=/tmp/.buildx-cache
+ # Looks like this was disabled
+ # provenance: false
  - name: Image digest
  run: echo step SHA ${{ steps.vars.outputs.sha_short }} tag ${{steps.vars.outputs.tag}} branch ${{steps.vars.outputs.branch}} digest ${{ steps.docker_build.outputs.digest }}


@@ -1,44 +0,0 @@
name: PyPi Test and Push tagged release
# Triggers the workflow on push or pull request events
on:
workflow_run:
workflows: ["ChangeDetection.io Test"]
tags: '*.*'
types: [completed]
jobs:
test-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
# - name: Install dependencies
# run: |
# python -m pip install --upgrade pip
# pip install flake8 pytest
# if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
# if [ -f requirements-dev.txt ]; then pip install -r requirements-dev.txt; fi
- name: Test that pip builds without error
run: |
pip3 --version
python3 -m pip install wheel
python3 setup.py bdist_wheel
python3 -m pip install dist/changedetection.io-*-none-any.whl --force
changedetection.io -d /tmp -p 10000 &
sleep 3
curl http://127.0.0.1:10000/static/styles/pure-min.css >/dev/null
killall -9 changedetection.io
# https://github.com/docker/build-push-action/blob/master/docs/advanced/test-before-push.md ?
# https://github.com/docker/buildx/issues/59 ? Needs to be one platform?
# https://github.com/docker/buildx/issues/495#issuecomment-918925854
#if: ${{ github.event_name == 'release'}}

@@ -10,11 +10,13 @@ on:
  paths:
  - requirements.txt
  - Dockerfile
+ - .github/workflows/*
  pull_request:
  paths:
  - requirements.txt
  - Dockerfile
+ - .github/workflows/*
  # Changes to requirements.txt packages and Dockerfile may or may not always be compatible with arm etc, so worth testing
  # @todo: some kind of path filter for requirements.txt and Dockerfile

@@ -8,32 +8,83 @@ jobs:
  runs-on: ubuntu-latest
  steps:
  - uses: actions/checkout@v2
- - name: Set up Python 3.9
+ # Mainly just for link/flake8
+ - name: Set up Python 3.10
  uses: actions/setup-python@v2
  with:
- python-version: 3.9
+ python-version: '3.10'
- - name: Install dependencies
- run: |
- python -m pip install --upgrade pip
- pip install flake8 pytest
- if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- if [ -f requirements-dev.txt ]; then pip install -r requirements-dev.txt; fi
  - name: Lint with flake8
  run: |
+ pip3 install flake8
  # stop the build if there are Python syntax errors or undefined names
  flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
  # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
  flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- - name: Unit tests
+ - name: Spin up ancillary testable services
  run: |
- python3 -m unittest changedetectionio.tests.unit.test_notification_diff
+ docker network create changedet-network
- - name: Test with pytest
+ # Selenium+browserless
+ docker run --network changedet-network -d --hostname selenium -p 4444:4444 --rm --shm-size="2g" selenium/standalone-chrome-debug:3.141.59
+ docker run --network changedet-network -d --hostname browserless -e "FUNCTION_BUILT_INS=[\"fs\",\"crypto\"]" -e "DEFAULT_LAUNCH_ARGS=[\"--window-size=1920,1080\"]" --rm -p 3000:3000 --shm-size="2g" browserless/chrome:1.53-chrome-stable
+ - name: Build changedetection.io container for testing
+ run: |
+ # Build a changedetection.io container and start testing inside
+ docker build . -t test-changedetectionio
+ - name: Test built container with pytest
  run: |
- # Each test is totally isolated and performs its own cleanup/reset
- cd changedetectionio; ./run_all_tests.sh
+ # Unit tests
+ docker run test-changedetectionio bash -c 'python3 -m unittest changedetectionio.tests.unit.test_notification_diff'
+ # All tests
+ docker run --network changedet-network test-changedetectionio bash -c 'cd changedetectionio && ./run_basic_tests.sh'
+ - name: Test built container selenium+browserless/playwright
+ run: |
+ # Selenium fetch
+ docker run --rm -e "WEBDRIVER_URL=http://selenium:4444/wd/hub" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest tests/fetchers/test_content.py && pytest tests/test_errorhandling.py'
+ # Playwright/Browserless fetch
+ docker run --rm -e "PLAYWRIGHT_DRIVER_URL=ws://browserless:3000" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest tests/fetchers/test_content.py && pytest tests/test_errorhandling.py && pytest tests/visualselector/test_fetch_data.py'
+ # Settings headers playwright tests - Call back in from Browserless, check headers
+ docker run --name "changedet" --hostname changedet --rm -e "FLASK_SERVER_NAME=changedet" -e "PLAYWRIGHT_DRIVER_URL=ws://browserless:3000?dumpio=true" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio; pytest --live-server-host=0.0.0.0 --live-server-port=5004 tests/test_request.py'
+ docker run --name "changedet" --hostname changedet --rm -e "FLASK_SERVER_NAME=changedet" -e "WEBDRIVER_URL=http://selenium:4444/wd/hub" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio; pytest --live-server-host=0.0.0.0 --live-server-port=5004 tests/test_request.py'
+ docker run --name "changedet" --hostname changedet --rm -e "FLASK_SERVER_NAME=changedet" -e "USE_EXPERIMENTAL_PUPPETEER_FETCH=yes" -e "PLAYWRIGHT_DRIVER_URL=ws://browserless:3000?dumpio=true" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio; pytest --live-server-host=0.0.0.0 --live-server-port=5004 tests/test_request.py'
+ # restock detection via playwright - added name=changedet here so that playwright/browserless can connect to it
+ docker run --rm --name "changedet" -e "FLASK_SERVER_NAME=changedet" -e "PLAYWRIGHT_DRIVER_URL=ws://browserless:3000" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-port=5004 --live-server-host=0.0.0.0 tests/restock/test_restock.py'
+ - name: Test with puppeteer fetcher and disk cache
+ run: |
+ docker run --rm -e "PUPPETEER_DISK_CACHE=/tmp/data/" -e "USE_EXPERIMENTAL_PUPPETEER_FETCH=yes" -e "PLAYWRIGHT_DRIVER_URL=ws://browserless:3000" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest tests/fetchers/test_content.py && pytest tests/test_errorhandling.py && pytest tests/visualselector/test_fetch_data.py'
+ # Browserless would have had -e "FUNCTION_BUILT_INS=[\"fs\",\"crypto\"]" added above
+ - name: Test proxy interaction
+ run: |
+ cd changedetectionio
+ ./run_proxy_tests.sh
+ cd ..
+ - name: Test changedetection.io container starts+runs basically without error
+ run: |
+ docker run -p 5556:5000 -d test-changedetectionio
+ sleep 3
+ # Should return 0 (no error) when grep finds it
+ curl -s http://localhost:5556 |grep -q checkbox-uuid
+ # and IPv6
+ curl -s -g -6 "http://[::1]:5556"|grep -q checkbox-uuid
+ #export WEBDRIVER_URL=http://localhost:4444/wd/hub
+ #pytest tests/fetchers/test_content.py
+ #pytest tests/test_errorhandling.py

.github/workflows/test-pip-build.yml (new file, 36 lines)

@@ -0,0 +1,36 @@
name: ChangeDetection.io PIP package test
# Triggers the workflow on push or pull request events
# This line doesnt work, even tho it is the documented one
on: [push, pull_request]
# Changes to requirements.txt packages and Dockerfile may or may not always be compatible with arm etc, so worth testing
# @todo: some kind of path filter for requirements.txt and Dockerfile
jobs:
test-pip-build-basics:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Test that the basic pip built package runs without error
run: |
set -e
mkdir dist
pip3 install wheel
python3 setup.py bdist_wheel
pip3 install -r requirements.txt
rm ./changedetection.py
rm -rf changedetectio
pip3 install dist/changedetection.io*.whl
changedetection.io -d /tmp -p 10000 &
sleep 3
curl http://127.0.0.1:10000/static/styles/pure-min.css >/dev/null
killall -9 changedetection.io

@@ -7,9 +7,3 @@ Otherwise, it's always best to PR into the `dev` branch.
  Please be sure that all new functionality has a matching test!
  Use `pytest` to validate/test, you can run the existing tests as `pytest tests/test_notification.py` for example
- ```
- pip3 install -r requirements-dev
- ```
- this is from https://github.com/dgtlmoon/changedetection.io/blob/master/requirements-dev.txt

@@ -1,7 +1,7 @@
  # pip dependencies install stage
- FROM python:3.8-slim as builder
+ FROM python:3.10-slim-bookworm as builder
- # rustc compiler would be needed on ARM type devices but theres an issue with some deps not building..
+ # See `cryptography` pin comment in requirements.txt
  ARG CRYPTOGRAPHY_DONT_BUILD_RUST=1
  RUN apt-get update && apt-get install -y --no-install-recommends \
@@ -29,22 +29,16 @@ RUN pip install --target=/dependencies playwright~=1.27.1 \
  || echo "WARN: Failed to install Playwright. The application can still run, but the Playwright option will be disabled."
  # Final image stage
- FROM python:3.8-slim
+ FROM python:3.10-slim-bookworm
- # Actual packages needed at runtime, usually due to the notification (apprise) backend
- # rustc compiler would be needed on ARM type devices but theres an issue with some deps not building..
- ARG CRYPTOGRAPHY_DONT_BUILD_RUST=1
- # Re #93, #73, excluding rustc (adds another 430Mb~)
  RUN apt-get update && apt-get install -y --no-install-recommends \
- g++ \
- gcc \
- libc-dev \
- libffi-dev \
- libjpeg-dev \
- libssl-dev \
- libxslt-dev \
- zlib1g-dev
+ libssl3 \
+ libxslt1.1 \
+ # For pdftohtml
+ poppler-utils \
+ zlib1g \
+ && apt-get clean && rm -rf /var/lib/apt/lists/*
  # https://stackoverflow.com/questions/58701233/docker-logs-erroneously-appears-empty-until-container-stops
  ENV PYTHONUNBUFFERED=1

@@ -1,9 +1,11 @@
  recursive-include changedetectionio/api *
- recursive-include changedetectionio/templates *
- recursive-include changedetectionio/static *
+ recursive-include changedetectionio/blueprint *
  recursive-include changedetectionio/model *
- recursive-include changedetectionio/tests *
+ recursive-include changedetectionio/processors *
  recursive-include changedetectionio/res *
+ recursive-include changedetectionio/static *
+ recursive-include changedetectionio/templates *
+ recursive-include changedetectionio/tests *
  prune changedetectionio/static/package-lock.json
  prune changedetectionio/static/styles/node_modules
  prune changedetectionio/static/styles/package-lock.json

@@ -2,10 +2,10 @@
  Live your data-life pro-actively, track website content changes and receive notifications via Discord, Email, Slack, Telegram and 70+ more
- [<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://lemonade.changedetection.io/start?src=pip)
+ [<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://changedetection.io)
- [**Don't have time? Let us host it for you! try our extremely affordable subscription use our proxies and support!**](https://lemonade.changedetection.io/start)
+ [**Don't have time? Let us host it for you! try our extremely affordable subscription use our proxies and support!**](https://changedetection.io)
  #### Example use cases

@@ -1,15 +1,17 @@
- ## Web Site Change Detection, Monitoring and Notification.
+ ## Web Site Change Detection, Restock monitoring and notifications.
- _Live your data-life pro-actively, Detect website changes and perform meaningful actions, trigger notifications via Discord, Email, Slack, Telegram, API calls and many more._
+ **_Detect website content changes and perform meaningful actions - trigger notifications via Discord, Email, Slack, Telegram, API calls and many more._**
+ _Live your data-life pro-actively._
- [<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://lemonade.changedetection.io/start?src=github)
+ [<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://changedetection.io?src=github)
  [![Release Version][release-shield]][release-link] [![Docker Pulls][docker-pulls]][docker-link] [![License][license-shield]](LICENSE.md)
  ![changedetection.io](https://github.com/dgtlmoon/changedetection.io/actions/workflows/test-only.yml/badge.svg?branch=master)
- [**Don't have time? Let us host it for you! try our $6.99/month subscription - use our proxies and support!**](https://lemonade.changedetection.io/start) , _half the price of other website change monitoring services and comes with unlimited watches & checks!_
+ [**Don't have time? Let us host it for you! try our $8.99/month subscription - use our proxies and support!**](https://changedetection.io) , _half the price of other website change monitoring services!_
  - Chrome browser included.
  - Super fast, no registration needed setup.
@@ -20,11 +22,11 @@ _Live your data-life pro-actively, Detect website changes and perform meaningful
  Available when connected to a <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Playwright-content-fetcher">playwright content fetcher</a> (included as part of our subscription service)
- [<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/visualselector-anim.gif" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />](https://lemonade.changedetection.io/start?src=github)
+ [<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/visualselector-anim.gif" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />](https://changedetection.io?src=github)
  ### Easily see what changed, examine by word, line, or individual character.
- [<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />](https://lemonade.changedetection.io/start?src=github)
+ [<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />](https://changedetection.io?src=github)
  ### Perform interactive browser steps
@@ -33,7 +35,7 @@ Fill in text boxes, click buttons and more, setup your changedetection scenario.
  Using the **Browser Steps** configuration, add basic steps before performing change detection, such as logging into websites, adding a product to a cart, accept cookie logins, entering dates and refining searches.
- [<img src="docs/browsersteps-anim.gif" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Website change detection with interactive browser steps, login, cookies etc" />](https://lemonade.changedetection.io/start?src=github)
+ [<img src="docs/browsersteps-anim.gif" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Website change detection with interactive browser steps, login, cookies etc" />](https://changedetection.io?src=github)
  After **Browser Steps** have been run, then visit the **Visual Selector** tab to refine the content you're interested in.
  Requires Playwright to be enabled.
@@ -43,9 +45,11 @@ Requires Playwright to be enabled.
  - Products and services have a change in pricing
  - _Out of stock notification_ and _Back In stock notification_
+ - Monitor and track PDF file changes, know when a PDF file has text changes.
  - Governmental department updates (changes are often only on their websites)
  - New software releases, security advisories when you're not on their mailing list.
  - Festivals with changes
+ - Discogs restock alerts and monitoring
  - Realestate listing changes
  - Know when your favourite whiskey is on sale, or other special deals are announced before anyone else
  - COVID related news from government websites
@@ -60,6 +64,9 @@ Requires Playwright to be enabled.
  - You have a very sensitive list of URLs to watch and you do _not_ want to use the paid alternatives. (Remember, _you_ are the product)
  - Get notified when certain keywords appear in Twitter search results
  - Proactively search for jobs, get notified when companies update their careers page, search job portals for keywords.
+ - Get alerts when new job positions are open on Bamboo HR and other job platforms
+ - Website defacement monitoring
+ - Pokémon Card Restock Tracker / Pokémon TCG Tracker
  _Need an actual Chrome runner with Javascript support? We support fetching via WebDriver and Playwright!</a>_
@@ -68,6 +75,7 @@ _Need an actual Chrome runner with Javascript support? We support fetching via W
  - Lots of trigger filters, such as "Trigger on text", "Remove text by selector", "Ignore text", "Extract text", also using regular-expressions!
  - Target elements with xPath and CSS Selectors, Easily monitor complex JSON with JSONPath or jq
  - Switch between fast non-JS and Chrome JS based "fetchers"
+ - Track changes in PDF files (Monitor text changed in the PDF, Also monitor PDF filesize and checksums)
  - Easily specify how often a site should be checked
  - Execute JS before extracting text (Good for logging in, see examples in the UI!)
  - Override Request Headers, Specify `POST` or `GET` and other methods
@@ -96,6 +104,8 @@ $ docker run -d --restart always -p "127.0.0.1:5000:5000" -v datastore-volume:/d
  `:latest` tag is our latest stable release, `:dev` tag is our bleeding edge `master` branch.
+ Alternative docker repository over at ghcr - [ghcr.io/dgtlmoon/changedetection.io](https://ghcr.io/dgtlmoon/changedetection.io)
  ### Windows
  See the install instructions at the wiki https://github.com/dgtlmoon/changedetection.io/wiki/Microsoft-Windows
@@ -135,7 +145,7 @@ See the wiki for more information https://github.com/dgtlmoon/changedetection.io
  ## Filters
  XPath, JSONPath, jq, and CSS support comes baked in! You can be as specific as you need, use XPath exported from various XPath element query creation tools.
- (We support LXML `re:test`, `re:math` and `re:replace`.)
+ (We support LXML `re:test`, `re:match` and `re:replace`.)
  ## Notifications
@@ -187,11 +197,29 @@ When you enable a `json:` or `jq:` filter, you can even automatically extract an
  <html>
  ...
  <script type="application/ld+json">
- {"@context":"http://schema.org","@type":"Product","name":"Nan Optipro Stage 1 Baby Formula 800g","price": 23.50 }
+ {
+ "@context":"http://schema.org/",
+ "@type":"Product",
+ "offers":{
+ "@type":"Offer",
+ "availability":"http://schema.org/InStock",
+ "price":"3949.99",
+ "priceCurrency":"USD",
+ "url":"https://www.newegg.com/p/3D5-000D-001T1"
+ },
+ "description":"Cobratype King Cobra Hero Desktop Gaming PC",
+ "name":"Cobratype King Cobra Hero Desktop Gaming PC",
+ "sku":"3D5-000D-001T1",
+ "itemCondition":"NewCondition"
+ }
  </script>
  ```
- `json:$.price` or `jq:.price` would give `23.50`, or you can extract the whole structure
+ `json:$..price` or `jq:..price` would give `3949.99`, or you can extract the whole structure (use a JSONpath test website to validate with)
+ The application also supports notifying you that it can follow this information automatically
@@ -201,13 +229,16 @@ See the wiki https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configura
  Raspberry Pi and linux/arm/v6 linux/arm/v7 arm64 devices are supported! See the wiki for [details](https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver)
+ ## API Support
+ Supports managing the website watch list [via our API](https://changedetection.io/docs/api_v1/index.html)
  ## Support us
  Do you use changedetection.io to make money? does it save you time or money? Does it make your life easier? less stressful? Remember, we write this software when we should be doing actual paid work, we have to buy food and pay rent just like you.
- Firstly, consider taking out a [change detection monthly subscription - unlimited checks and watches](https://lemonade.changedetection.io/start) , even if you don't use it, you still get the warm fuzzy feeling of helping out the project. (And who knows, you might just use it!)
+ Firstly, consider taking out a [change detection monthly subscription - unlimited checks and watches](https://changedetection.io?src=github) , even if you don't use it, you still get the warm fuzzy feeling of helping out the project. (And who knows, you might just use it!)
  Or directly donate an amount PayPal [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/donate/?hosted_button_id=7CP6HR9ZCNDYJ)
@@ -225,5 +256,5 @@ I offer commercial support, this software is depended on by network security, ae
  [test-shield]: https://github.com/dgtlmoon/changedetection.io/actions/workflows/test-only.yml/badge.svg?branch=master
  [license-shield]: https://img.shields.io/github/license/dgtlmoon/changedetection.io.svg?style=for-the-badge
- [release-link]: https://github.com/dgtlmoon.com/changedetection.io/releases
+ [release-link]: https://github.com/dgtlmoon/changedetection.io/releases
  [docker-link]: https://hub.docker.com/r/dgtlmoon/changedetection.io
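As a rough illustration of the `json:$..price` example changed in the README hunk above, the sketch below runs the same recursive JSONPath query against the new LD+JSON snippet. It uses the third-party jsonpath-ng package purely for demonstration and is not the application's own extraction code.

import json
from jsonpath_ng import parse

# The LD+JSON product data from the README example above
ld_json = json.loads("""
{
  "@context": "http://schema.org/",
  "@type": "Product",
  "offers": {
    "@type": "Offer",
    "availability": "http://schema.org/InStock",
    "price": "3949.99",
    "priceCurrency": "USD",
    "url": "https://www.newegg.com/p/3D5-000D-001T1"
  },
  "description": "Cobratype King Cobra Hero Desktop Gaming PC",
  "name": "Cobratype King Cobra Hero Desktop Gaming PC",
  "sku": "3D5-000D-001T1",
  "itemCondition": "NewCondition"
}
""")

# `$..price` searches every level of the document, so the nested offers.price is found
prices = [match.value for match in parse("$..price").find(ld_json)]
print(prices)  # ['3949.99']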

@@ -7,7 +7,7 @@
  from changedetectionio import changedetection
  import multiprocessing
- import signal
+ import sys
  import os
  def sigchld_handler(_signo, _stack_frame):
@@ -35,6 +35,9 @@ if __name__ == '__main__':
  try:
  while True:
  time.sleep(1)
+ if not parse_process.is_alive():
+     # Process died/crashed for some reason, exit with error set
+     sys.exit(1)
  except KeyboardInterrupt:
  #parse_process.terminate() not needed, because this process will issue it to the sub-process anyway
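The hunk above adds a liveness check so the launcher exits with a non-zero status when its worker process dies. The following is a minimal, self-contained sketch of that supervisor pattern only; it is not the project's launcher, and the `worker` function here is purely hypothetical.

import multiprocessing
import sys
import time

def worker():
    # Stand-in for the real application loop (hypothetical)
    time.sleep(5)
    raise RuntimeError("simulated crash")

if __name__ == '__main__':
    parse_process = multiprocessing.Process(target=worker)
    parse_process.start()
    try:
        while True:
            time.sleep(1)
            if not parse_process.is_alive():
                # Process died/crashed for some reason, exit with error set
                sys.exit(1)
    except KeyboardInterrupt:
        parse_process.terminate()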

View File

@@ -1,5 +1,15 @@
#!/usr/bin/python3 #!/usr/bin/python3
from changedetectionio import queuedWatchMetaData
from copy import deepcopy
from distutils.util import strtobool
from feedgen.feed import FeedGenerator
from flask_compress import Compress as FlaskCompress
from flask_login import current_user
from flask_restful import abort, Api
from flask_wtf import CSRFProtect
from functools import wraps
from threading import Event
import datetime import datetime
import flask_login import flask_login
import logging import logging
@@ -10,11 +20,6 @@ import threading
import time import time
import timeago import timeago
from copy import deepcopy
from distutils.util import strtobool
from feedgen.feed import FeedGenerator
from threading import Event
from flask import ( from flask import (
Flask, Flask,
abort, abort,
@@ -27,15 +32,13 @@ from flask import (
session, session,
url_for, url_for,
) )
from flask_compress import Compress as FlaskCompress
from flask_login import login_required from flask_paginate import Pagination, get_page_parameter
from flask_restful import abort, Api
from flask_wtf import CSRFProtect
from changedetectionio import html_tools from changedetectionio import html_tools
from changedetectionio.api import api_v1 from changedetectionio.api import api_v1
__version__ = '0.39.22.1' __version__ = '0.42.3'
datastore = None datastore = None
@@ -52,7 +55,6 @@ app = Flask(__name__,
static_url_path="", static_url_path="",
static_folder="static", static_folder="static",
template_folder="templates") template_folder="templates")
from flask_compress import Compress
# Super handy for compressing large BrowserSteps responses and others # Super handy for compressing large BrowserSteps responses and others
FlaskCompress(app) FlaskCompress(app)
@@ -64,7 +66,8 @@ app.config.exit = Event()
app.config['NEW_VERSION_AVAILABLE'] = False app.config['NEW_VERSION_AVAILABLE'] = False
app.config['LOGIN_DISABLED'] = False if os.getenv('FLASK_SERVER_NAME'):
app.config['SERVER_NAME'] = os.getenv('FLASK_SERVER_NAME')
#app.config["EXPLAIN_TEMPLATE_LOADING"] = True #app.config["EXPLAIN_TEMPLATE_LOADING"] = True
@@ -73,7 +76,6 @@ app.config['TEMPLATES_AUTO_RELOAD'] = True
app.jinja_env.add_extension('jinja2.ext.loopcontrols') app.jinja_env.add_extension('jinja2.ext.loopcontrols')
csrf = CSRFProtect() csrf = CSRFProtect()
csrf.init_app(app) csrf.init_app(app)
notification_debug_log=[] notification_debug_log=[]
watch_api = Api(app, decorators=[csrf.exempt]) watch_api = Api(app, decorators=[csrf.exempt])
@@ -122,6 +124,15 @@ def _jinja2_filter_datetimestamp(timestamp, format="%Y-%m-%d %H:%M:%S"):
return timeago.format(timestamp, time.time()) return timeago.format(timestamp, time.time())
@app.template_filter('pagination_slice')
def _jinja2_filter_pagination_slice(arr, skip):
per_page = datastore.data['settings']['application'].get('pager_size', 50)
if per_page:
return arr[skip:skip + per_page]
return arr
@app.template_filter('format_seconds_ago') @app.template_filter('format_seconds_ago')
def _jinja2_filter_seconds_precise(timestamp): def _jinja2_filter_seconds_precise(timestamp):
if timestamp == False: if timestamp == False:
@@ -148,7 +159,6 @@ class User(flask_login.UserMixin):
# Compare given password against JSON store or Env var # Compare given password against JSON store or Env var
def check_password(self, password): def check_password(self, password):
import base64 import base64
import hashlib import hashlib
@@ -156,11 +166,9 @@ class User(flask_login.UserMixin):
raw_salt_pass = os.getenv("SALTED_PASS", False) raw_salt_pass = os.getenv("SALTED_PASS", False)
if not raw_salt_pass: if not raw_salt_pass:
raw_salt_pass = datastore.data['settings']['application']['password'] raw_salt_pass = datastore.data['settings']['application'].get('password')
raw_salt_pass = base64.b64decode(raw_salt_pass) raw_salt_pass = base64.b64decode(raw_salt_pass)
salt_from_storage = raw_salt_pass[:32] # 32 is the length of the salt salt_from_storage = raw_salt_pass[:32] # 32 is the length of the salt
# Use the exact same setup you used to generate the key, but this time put in the password to check # Use the exact same setup you used to generate the key, but this time put in the password to check
@@ -170,21 +178,44 @@ class User(flask_login.UserMixin):
salt_from_storage, salt_from_storage,
100000 100000
) )
new_key = salt_from_storage + new_key new_key = salt_from_storage + new_key
return new_key == raw_salt_pass return new_key == raw_salt_pass
pass pass
def login_optionally_required(func):
@wraps(func)
def decorated_view(*args, **kwargs):
has_password_enabled = datastore.data['settings']['application'].get('password') or os.getenv("SALTED_PASS", False)
# Permitted
if request.endpoint == 'static_content' and request.view_args['group'] == 'styles':
return func(*args, **kwargs)
# Permitted
elif request.endpoint == 'diff_history_page' and datastore.data['settings']['application'].get('shared_diff_access'):
return func(*args, **kwargs)
elif request.method in flask_login.config.EXEMPT_METHODS:
return func(*args, **kwargs)
elif app.config.get('LOGIN_DISABLED'):
return func(*args, **kwargs)
elif has_password_enabled and not current_user.is_authenticated:
return app.login_manager.unauthorized()
return func(*args, **kwargs)
return decorated_view
def changedetection_app(config=None, datastore_o=None): def changedetection_app(config=None, datastore_o=None):
global datastore global datastore
datastore = datastore_o datastore = datastore_o
# so far just for read-only via tests, but this will be moved eventually to be the main source # so far just for read-only via tests, but this will be moved eventually to be the main source
# (instead of the global var) # (instead of the global var)
app.config['DATASTORE']=datastore_o app.config['DATASTORE'] = datastore_o
#app.config.update(config or {})
login_manager = flask_login.LoginManager(app) login_manager = flask_login.LoginManager(app)
login_manager.login_view = 'login' login_manager.login_view = 'login'
@@ -212,6 +243,8 @@ def changedetection_app(config=None, datastore_o=None):
# https://flask-cors.readthedocs.io/en/latest/ # https://flask-cors.readthedocs.io/en/latest/
# CORS(app) # CORS(app)
@login_manager.user_loader @login_manager.user_loader
def user_loader(email): def user_loader(email):
user = User() user = User()
@@ -220,7 +253,7 @@ def changedetection_app(config=None, datastore_o=None):
@login_manager.unauthorized_handler @login_manager.unauthorized_handler
def unauthorized_handler(): def unauthorized_handler():
# @todo validate its a URL of this host and use that flash("You must be logged in, please log in.", 'error')
return redirect(url_for('login', next=url_for('index'))) return redirect(url_for('login', next=url_for('index')))
@app.route('/logout') @app.route('/logout')
@@ -233,10 +266,6 @@ def changedetection_app(config=None, datastore_o=None):
@app.route('/login', methods=['GET', 'POST']) @app.route('/login', methods=['GET', 'POST'])
def login(): def login():
if not datastore.data['settings']['application']['password'] and not os.getenv("SALTED_PASS", False):
flash("Login not required, no password enabled.", "notice")
return redirect(url_for('index'))
if request.method == 'GET': if request.method == 'GET':
if flask_login.current_user.is_authenticated: if flask_login.current_user.is_authenticated:
flash("Already logged in") flash("Already logged in")
@@ -271,27 +300,22 @@ def changedetection_app(config=None, datastore_o=None):
return redirect(url_for('login')) return redirect(url_for('login'))
@app.before_request @app.before_request
def do_something_whenever_a_request_comes_in(): def before_request_handle_cookie_x_settings():
# Disable password login if there is not one set
# (No password in settings or env var)
app.config['LOGIN_DISABLED'] = datastore.data['settings']['application']['password'] == False and os.getenv("SALTED_PASS", False) == False
# Set the auth cookie path if we're running as X-settings/X-Forwarded-Prefix # Set the auth cookie path if we're running as X-settings/X-Forwarded-Prefix
if os.getenv('USE_X_SETTINGS') and 'X-Forwarded-Prefix' in request.headers: if os.getenv('USE_X_SETTINGS') and 'X-Forwarded-Prefix' in request.headers:
app.config['REMEMBER_COOKIE_PATH'] = request.headers['X-Forwarded-Prefix'] app.config['REMEMBER_COOKIE_PATH'] = request.headers['X-Forwarded-Prefix']
app.config['SESSION_COOKIE_PATH'] = request.headers['X-Forwarded-Prefix'] app.config['SESSION_COOKIE_PATH'] = request.headers['X-Forwarded-Prefix']
# For the RSS path, allow access via a token return None
if request.path == '/rss' and request.args.get('token'):
app_rss_token = datastore.data['settings']['application']['rss_access_token']
rss_url_token = request.args.get('token')
if app_rss_token == rss_url_token:
app.config['LOGIN_DISABLED'] = True
@app.route("/rss", methods=['GET']) @app.route("/rss", methods=['GET'])
@login_required
def rss(): def rss():
# Always requires token set
app_rss_token = datastore.data['settings']['application'].get('rss_access_token')
rss_url_token = request.args.get('token')
if rss_url_token != app_rss_token:
return "Access denied, bad token", 403
from . import diff from . import diff
limit_tag = request.args.get('tag') limit_tag = request.args.get('tag')
@@ -327,8 +351,6 @@ def changedetection_app(config=None, datastore_o=None):
if len(dates) < 2: if len(dates) < 2:
continue continue
prev_fname = watch.history[dates[-2]]
if not watch.viewed: if not watch.viewed:
# Re #239 - GUID needs to be individual for each event # Re #239 - GUID needs to be individual for each event
# @todo In the future make this a configurable link back (see work on BASE_URL https://github.com/dgtlmoon/changedetection.io/pull/228) # @todo In the future make this a configurable link back (see work on BASE_URL https://github.com/dgtlmoon/changedetection.io/pull/228)
@@ -349,9 +371,12 @@ def changedetection_app(config=None, datastore_o=None):
watch_title = watch.get('title') if watch.get('title') else watch.get('url') watch_title = watch.get('title') if watch.get('title') else watch.get('url')
fe.title(title=watch_title) fe.title(title=watch_title)
latest_fname = watch.history[dates[-1]]
html_diff = diff.render_diff(prev_fname, latest_fname, include_equal=False, line_feed_sep="</br>")
html_diff = diff.render_diff(previous_version_file_contents=watch.get_history_snapshot(dates[-2]),
newest_version_file_contents=watch.get_history_snapshot(dates[-1]),
include_equal=False,
line_feed_sep="<br>")
fe.content(content="<html><body><h4>{}</h4>{}</body></html>".format(watch_title, html_diff), fe.content(content="<html><body><h4>{}</h4>{}</body></html>".format(watch_title, html_diff),
type='CDATA') type='CDATA')
@@ -365,7 +390,7 @@ def changedetection_app(config=None, datastore_o=None):
return response return response
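Because /rss above now enforces its own token check (returning 403 "Access denied, bad token" otherwise), feed readers must always pass the token as a query parameter. A minimal sketch of fetching the feed with the requests library; the base URL and token value below are placeholders, not values taken from this diff:

import requests

# Placeholders - use your own instance address and the RSS access token shown in the settings UI.
BASE_URL = "http://localhost:5000"
RSS_TOKEN = "example-rss-access-token"

resp = requests.get(f"{BASE_URL}/rss", params={"token": RSS_TOKEN, "tag": "news"})
if resp.status_code == 403:
    print("Access denied, bad token")   # mirrors the 403 branch added above
else:
    print(resp.text[:200])              # start of the RSS/Atom XML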
@app.route("/", methods=['GET']) @app.route("/", methods=['GET'])
@login_required @login_optionally_required
def index(): def index():
from changedetectionio import forms from changedetectionio import forms
@@ -378,55 +403,88 @@ def changedetection_app(config=None, datastore_o=None):
if op: if op:
uuid = request.args.get('uuid') uuid = request.args.get('uuid')
if op == 'pause': if op == 'pause':
datastore.data['watching'][uuid]['paused'] ^= True datastore.data['watching'][uuid].toggle_pause()
elif op == 'mute': elif op == 'mute':
datastore.data['watching'][uuid]['notification_muted'] ^= True datastore.data['watching'][uuid].toggle_mute()
datastore.needs_write = True datastore.needs_write = True
return redirect(url_for('index', tag = limit_tag)) return redirect(url_for('index', tag = limit_tag))
# Sort by last_changed and add the uuid which is usually the key.. # Sort by last_changed and add the uuid which is usually the key..
sorted_watches = [] sorted_watches = []
search_q = request.args.get('q').strip().lower() if request.args.get('q') else False
for uuid, watch in datastore.data['watching'].items():
if limit_tag != None:
if limit_tag:
# Support for comma separated list of tags.
if watch['tag'] is None:
if not watch.get('tag'):
continue
for tag_in_watch in watch['tag'].split(','):
for tag_in_watch in watch.get('tag', '').split(','):
tag_in_watch = tag_in_watch.strip()
if tag_in_watch == limit_tag:
watch['uuid'] = uuid
sorted_watches.append(watch)
if search_q:
if (watch.get('title') and search_q in watch.get('title').lower()) or search_q in watch.get('url', '').lower():
sorted_watches.append(watch)
else:
sorted_watches.append(watch)
else:
watch['uuid'] = uuid
#watch['uuid'] = uuid
if search_q:
if (watch.get('title') and search_q in watch.get('title').lower()) or search_q in watch.get('url', '').lower():
sorted_watches.append(watch)
else:
sorted_watches.append(watch)
existing_tags = datastore.get_all_tags() existing_tags = datastore.get_all_tags()
form = forms.quickWatchForm(request.form) form = forms.quickWatchForm(request.form)
output = render_template("watch-overview.html", page = request.args.get(get_page_parameter(), type=int, default=1)
form=form, total_count = len(sorted_watches)
watches=sorted_watches,
tags=existing_tags, pagination = Pagination(page=page,
total=total_count,
per_page=datastore.data['settings']['application'].get('pager_size', 50), css_framework="semantic")
output = render_template(
"watch-overview.html",
# Don't link to hosting when we're on the hosting environment
active_tag=limit_tag, active_tag=limit_tag,
app_rss_token=datastore.data['settings']['application']['rss_access_token'], app_rss_token=datastore.data['settings']['application']['rss_access_token'],
has_unviewed=datastore.has_unviewed, form=form,
# Don't link to hosting when we're on the hosting environment
hosted_sticky=os.getenv("SALTED_PASS", False) == False,
guid=datastore.data['app_guid'], guid=datastore.data['app_guid'],
queued_uuids=[uuid for p,uuid in update_q.queue]) has_proxies=datastore.proxy_list,
has_unviewed=datastore.has_unviewed,
hosted_sticky=os.getenv("SALTED_PASS", False) == False,
pagination=pagination,
queued_uuids=[q_uuid.item['uuid'] for q_uuid in update_q.queue],
search_q=request.args.get('q','').strip(),
sort_attribute=request.args.get('sort') if request.args.get('sort') else request.cookies.get('sort'),
sort_order=request.args.get('order') if request.args.get('order') else request.cookies.get('order'),
system_default_fetcher=datastore.data['settings']['application'].get('fetch_backend'),
tags=existing_tags,
watches=sorted_watches
)
if session.get('share-link'): if session.get('share-link'):
del(session['share-link']) del(session['share-link'])
return output
resp = make_response(output)
# The template can run on cookie or url query info
if request.args.get('sort'):
resp.set_cookie('sort', request.args.get('sort'))
if request.args.get('order'):
resp.set_cookie('order', request.args.get('order'))
return resp
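The sort column and order chosen on the overview page are now taken from the query string when present, otherwise from cookies, and written back to cookies on the response. A self-contained sketch of that query-string-or-cookie fallback pattern (the route, defaults and template string are illustrative, not from the diff):

from flask import Flask, request, make_response, render_template_string

app = Flask(__name__)

@app.route("/")
def index():
    # Prefer an explicit query parameter, otherwise fall back to the remembered cookie.
    sort_attribute = request.args.get('sort') or request.cookies.get('sort') or 'last_changed'
    sort_order = request.args.get('order') or request.cookies.get('order') or 'desc'

    resp = make_response(render_template_string(
        "sorting by {{ a }} ({{ o }})", a=sort_attribute, o=sort_order))

    # Persist an explicit choice so the next page load keeps it.
    if request.args.get('sort'):
        resp.set_cookie('sort', request.args.get('sort'))
    if request.args.get('order'):
        resp.set_cookie('order', request.args.get('order'))
    return resp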
# AJAX endpoint for sending a test # AJAX endpoint for sending a test
@app.route("/notification/send-test", methods=['POST']) @app.route("/notification/send-test", methods=['POST'])
@login_required @login_optionally_required
def ajax_callback_send_notification_test(): def ajax_callback_send_notification_test():
import apprise import apprise
@@ -446,11 +504,19 @@ def changedetection_app(config=None, datastore_o=None):
try: try:
n_object = {'watch_url': request.form['window_url'], n_object = {'watch_url': request.form['window_url'],
'notification_urls': request.form['notification_urls'].splitlines(), 'notification_urls': request.form['notification_urls'].splitlines()
'notification_title': request.form['notification_title'].strip(),
'notification_body': request.form['notification_body'].strip(),
'notification_format': request.form['notification_format'].strip()
} }
# Only use if present, if not set in n_object it should use the default system value
if 'notification_format' in request.form and request.form['notification_format'].strip():
n_object['notification_format'] = request.form.get('notification_format', '').strip()
if 'notification_title' in request.form and request.form['notification_title'].strip():
n_object['notification_title'] = request.form.get('notification_title', '').strip()
if 'notification_body' in request.form and request.form['notification_body'].strip():
n_object['notification_body'] = request.form.get('notification_body', '').strip()
notification_q.put(n_object) notification_q.put(n_object)
except Exception as e: except Exception as e:
return make_response({'error': str(e)}, 400) return make_response({'error': str(e)}, 400)
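The test-notification endpoint now only adds notification_title, notification_body and notification_format to the queued object when the form supplies a non-empty value, so anything left blank falls back to the system-wide default at send time. A small standalone illustration of that pattern (the helper name and default values are made up for the example):

def build_notification_object(form, system_defaults):
    # Mandatory fields are always present.
    n_object = {
        'watch_url': form['window_url'],
        'notification_urls': form['notification_urls'].splitlines(),
    }
    # Optional fields are only copied when the form actually supplied something.
    for key in ('notification_format', 'notification_title', 'notification_body'):
        if form.get(key, '').strip():
            n_object[key] = form[key].strip()
    # Whatever is still missing is resolved from the system settings.
    return {**system_defaults, **n_object}

defaults = {'notification_title': 'Change detected for {{watch_url}}', 'notification_format': 'Text'}
form = {'window_url': 'https://example.com',
        'notification_urls': 'mailto://me@example.com',
        'notification_title': ''}        # left empty, so the default title is used
print(build_notification_object(form, defaults))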
@@ -459,7 +525,7 @@ def changedetection_app(config=None, datastore_o=None):
@app.route("/clear_history/<string:uuid>", methods=['GET']) @app.route("/clear_history/<string:uuid>", methods=['GET'])
@login_required @login_optionally_required
def clear_watch_history(uuid): def clear_watch_history(uuid):
try: try:
datastore.clear_watch_history(uuid) datastore.clear_watch_history(uuid)
@@ -471,7 +537,7 @@ def changedetection_app(config=None, datastore_o=None):
return redirect(url_for('index')) return redirect(url_for('index'))
@app.route("/clear_history", methods=['GET', 'POST']) @app.route("/clear_history", methods=['GET', 'POST'])
@login_required @login_optionally_required
def clear_all_history(): def clear_all_history():
if request.method == 'POST': if request.method == 'POST':
@@ -492,49 +558,15 @@ def changedetection_app(config=None, datastore_o=None):
output = render_template("clear_all_history.html") output = render_template("clear_all_history.html")
return output return output
# If they edited an existing watch, we need to know to reset the current/previous md5 to include
# the excluded text.
def get_current_checksum_include_ignore_text(uuid):
import hashlib
from changedetectionio import fetch_site_status
# Get the most recent one
newest_history_key = datastore.data['watching'][uuid].get('newest_history_key')
# 0 means that theres only one, so that there should be no 'unviewed' history available
if newest_history_key == 0:
newest_history_key = list(datastore.data['watching'][uuid].history.keys())[0]
if newest_history_key:
with open(datastore.data['watching'][uuid].history[newest_history_key],
encoding='utf-8') as file:
raw_content = file.read()
handler = fetch_site_status.perform_site_check(datastore=datastore)
stripped_content = html_tools.strip_ignore_text(raw_content,
datastore.data['watching'][uuid]['ignore_text'])
if datastore.data['settings']['application'].get('ignore_whitespace', False):
checksum = hashlib.md5(stripped_content.translate(None, b'\r\n\t ')).hexdigest()
else:
checksum = hashlib.md5(stripped_content).hexdigest()
return checksum
return datastore.data['watching'][uuid]['previous_md5']
@app.route("/edit/<string:uuid>", methods=['GET', 'POST']) @app.route("/edit/<string:uuid>", methods=['GET', 'POST'])
@login_required @login_optionally_required
# https://stackoverflow.com/questions/42984453/wtforms-populate-form-with-data-if-data-exists # https://stackoverflow.com/questions/42984453/wtforms-populate-form-with-data-if-data-exists
# https://wtforms.readthedocs.io/en/3.0.x/forms/#wtforms.form.Form.populate_obj ? # https://wtforms.readthedocs.io/en/3.0.x/forms/#wtforms.form.Form.populate_obj ?
def edit_page(uuid): def edit_page(uuid):
from changedetectionio import forms from . import forms
from changedetectionio.blueprint.browser_steps.browser_steps import browser_step_ui_config from .blueprint.browser_steps.browser_steps import browser_step_ui_config
from . import processors
using_default_check_time = True using_default_check_time = True
# More for testing, possible to return the first/only # More for testing, possible to return the first/only
@@ -549,6 +581,15 @@ def changedetection_app(config=None, datastore_o=None):
flash("No watch with the UUID %s found." % (uuid), "error") flash("No watch with the UUID %s found." % (uuid), "error")
return redirect(url_for('index')) return redirect(url_for('index'))
switch_processor = request.args.get('switch_processor')
if switch_processor:
for p in processors.available_processors():
if p[0] == switch_processor:
datastore.data['watching'][uuid]['processor'] = switch_processor
flash(f"Switched to mode - {p[1]}.")
datastore.clear_watch_history(uuid)
redirect(url_for('edit_page', uuid=uuid))
# be sure we update with a copy instead of accidentally editing the live object by reference
default = deepcopy(datastore.data['watching'][uuid]) default = deepcopy(datastore.data['watching'][uuid])
@@ -568,6 +609,8 @@ def changedetection_app(config=None, datastore_o=None):
data=default, data=default,
) )
form.fetch_backend.choices.append(("system", 'System settings default'))
# form.browser_steps[0] can be assumed that we 'goto url' first # form.browser_steps[0] can be assumed that we 'goto url' first
if datastore.proxy_list is None: if datastore.proxy_list is None:
@@ -580,6 +623,7 @@ def changedetection_app(config=None, datastore_o=None):
if request.method == 'POST' and form.validate(): if request.method == 'POST' and form.validate():
extra_update_obj = {} extra_update_obj = {}
if request.args.get('unpause_on_save'): if request.args.get('unpause_on_save'):
@@ -596,29 +640,26 @@ def changedetection_app(config=None, datastore_o=None):
using_default_check_time = False using_default_check_time = False
break break
# Use the default if its the same as system wide
if form.fetch_backend.data == datastore.data['settings']['application']['fetch_backend']:
extra_update_obj['fetch_backend'] = None
# Ignore text # Ignore text
form_ignore_text = form.ignore_text.data form_ignore_text = form.ignore_text.data
datastore.data['watching'][uuid]['ignore_text'] = form_ignore_text datastore.data['watching'][uuid]['ignore_text'] = form_ignore_text
# Reset the previous_md5 so we process a new snapshot including stripping ignore text.
if form_ignore_text:
if len(datastore.data['watching'][uuid].history):
extra_update_obj['previous_md5'] = get_current_checksum_include_ignore_text(uuid=uuid)
# Reset the previous_md5 so we process a new snapshot including stripping ignore text.
if form.include_filters.data != datastore.data['watching'][uuid].get('include_filters', []):
if len(datastore.data['watching'][uuid].history):
extra_update_obj['previous_md5'] = get_current_checksum_include_ignore_text(uuid=uuid)
# Be sure proxy value is None # Be sure proxy value is None
if datastore.proxy_list is not None and form.data['proxy'] == '': if datastore.proxy_list is not None and form.data['proxy'] == '':
extra_update_obj['proxy'] = None extra_update_obj['proxy'] = None
# Unsetting all filter_text methods should make it go back to default
# This particularly affects tests running
if 'filter_text_added' in form.data and not form.data.get('filter_text_added') \
and 'filter_text_replaced' in form.data and not form.data.get('filter_text_replaced') \
and 'filter_text_removed' in form.data and not form.data.get('filter_text_removed'):
extra_update_obj['filter_text_added'] = True
extra_update_obj['filter_text_replaced'] = True
extra_update_obj['filter_text_removed'] = True
datastore.data['watching'][uuid].update(form.data) datastore.data['watching'][uuid].update(form.data)
datastore.data['watching'][uuid].update(extra_update_obj) datastore.data['watching'][uuid].update(extra_update_obj)
@@ -632,7 +673,7 @@ def changedetection_app(config=None, datastore_o=None):
datastore.needs_write_urgent = True datastore.needs_write_urgent = True
# Queue the watch for immediate recheck, with a higher priority # Queue the watch for immediate recheck, with a higher priority
update_q.put((1, uuid)) update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': False}))
# Diff page [edit] link should go back to diff page # Diff page [edit] link should go back to diff page
if request.args.get("next") and request.args.get("next") == 'diff': if request.args.get("next") and request.args.get("next") == 'diff':
@@ -646,8 +687,6 @@ def changedetection_app(config=None, datastore_o=None):
visualselector_data_is_ready = datastore.visualselector_data_is_ready(uuid) visualselector_data_is_ready = datastore.visualselector_data_is_ready(uuid)
# Only works reliably with Playwright
visualselector_enabled = os.getenv('PLAYWRIGHT_DRIVER_URL', False) and default['fetch_backend'] == 'html_webdriver'
# JQ is difficult to install on windows and must be manually added (outside requirements.txt) # JQ is difficult to install on windows and must be manually added (outside requirements.txt)
jq_support = True jq_support = True
@@ -658,16 +697,23 @@ def changedetection_app(config=None, datastore_o=None):
watch = datastore.data['watching'].get(uuid) watch = datastore.data['watching'].get(uuid)
system_uses_webdriver = datastore.data['settings']['application']['fetch_backend'] == 'html_webdriver' system_uses_webdriver = datastore.data['settings']['application']['fetch_backend'] == 'html_webdriver'
is_html_webdriver = True if watch.get('fetch_backend') == 'html_webdriver' or (
watch.get('fetch_backend', None) is None and system_uses_webdriver) else False
is_html_webdriver = False
if (watch.get('fetch_backend') == 'system' and system_uses_webdriver) or watch.get('fetch_backend') == 'html_webdriver':
is_html_webdriver = True
# Only works reliably with Playwright
visualselector_enabled = os.getenv('PLAYWRIGHT_DRIVER_URL', False) and is_html_webdriver
output = render_template("edit.html", output = render_template("edit.html",
available_processors=processors.available_processors(),
browser_steps_config=browser_step_ui_config, browser_steps_config=browser_step_ui_config,
current_base_url=datastore.data['settings']['application']['base_url'], current_base_url=datastore.data['settings']['application']['base_url'],
emailprefix=os.getenv('NOTIFICATION_MAIL_BUTTON_PREFIX', False), emailprefix=os.getenv('NOTIFICATION_MAIL_BUTTON_PREFIX', False),
form=form, form=form,
has_default_notification_urls=True if len(datastore.data['settings']['application']['notification_urls']) else False, has_default_notification_urls=True if len(datastore.data['settings']['application']['notification_urls']) else False,
has_empty_checktime=using_default_check_time, has_empty_checktime=using_default_check_time,
has_extra_headers_file=watch.has_extra_headers_file or datastore.has_extra_headers_file,
is_html_webdriver=is_html_webdriver, is_html_webdriver=is_html_webdriver,
jq_support=jq_support, jq_support=jq_support,
playwright_enabled=os.getenv('PLAYWRIGHT_DRIVER_URL', False), playwright_enabled=os.getenv('PLAYWRIGHT_DRIVER_URL', False),
@@ -681,7 +727,7 @@ def changedetection_app(config=None, datastore_o=None):
return output return output
@app.route("/settings", methods=['GET', "POST"]) @app.route("/settings", methods=['GET', "POST"])
@login_required @login_optionally_required
def settings_page(): def settings_page():
from changedetectionio import content_fetcher, forms from changedetectionio import content_fetcher, forms
@@ -761,9 +807,11 @@ def changedetection_app(config=None, datastore_o=None):
return output return output
@app.route("/import", methods=['GET', "POST"]) @app.route("/import", methods=['GET', "POST"])
@login_required @login_optionally_required
def import_page(): def import_page():
remaining_urls = [] remaining_urls = []
from . import forms
if request.method == 'POST': if request.method == 'POST':
from .importer import import_url_list, import_distill_io_json from .importer import import_url_list, import_distill_io_json
@@ -771,9 +819,9 @@ def changedetection_app(config=None, datastore_o=None):
if request.values.get('urls') and len(request.values.get('urls').strip()): if request.values.get('urls') and len(request.values.get('urls').strip()):
# Import and push into the queue for immediate update check # Import and push into the queue for immediate update check
importer = import_url_list() importer = import_url_list()
importer.run(data=request.values.get('urls'), flash=flash, datastore=datastore) importer.run(data=request.values.get('urls'), flash=flash, datastore=datastore, processor=request.values.get('processor'))
for uuid in importer.new_uuids: for uuid in importer.new_uuids:
update_q.put((1, uuid)) update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': True}))
if len(importer.remaining_data) == 0: if len(importer.remaining_data) == 0:
return redirect(url_for('index')) return redirect(url_for('index'))
@@ -786,12 +834,15 @@ def changedetection_app(config=None, datastore_o=None):
d_importer = import_distill_io_json() d_importer = import_distill_io_json()
d_importer.run(data=request.values.get('distill-io'), flash=flash, datastore=datastore) d_importer.run(data=request.values.get('distill-io'), flash=flash, datastore=datastore)
for uuid in d_importer.new_uuids: for uuid in d_importer.new_uuids:
update_q.put((1, uuid)) update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': True}))
form = forms.importForm(formdata=request.form if request.method == 'POST' else None,
# data=default,
)
# Could be some remaining, or we could be on GET # Could be some remaining, or we could be on GET
output = render_template("import.html", output = render_template("import.html",
form=form,
import_url_list_remaining="\n".join(remaining_urls), import_url_list_remaining="\n".join(remaining_urls),
original_distill_json='' original_distill_json=''
) )
@@ -799,7 +850,7 @@ def changedetection_app(config=None, datastore_o=None):
# Clear all statuses, so we do not see the 'unviewed' class # Clear all statuses, so we do not see the 'unviewed' class
@app.route("/form/mark-all-viewed", methods=['GET']) @app.route("/form/mark-all-viewed", methods=['GET'])
@login_required @login_optionally_required
def mark_all_viewed(): def mark_all_viewed():
# Save the current newest history as the most recently viewed # Save the current newest history as the most recently viewed
@@ -809,7 +860,7 @@ def changedetection_app(config=None, datastore_o=None):
return redirect(url_for('index')) return redirect(url_for('index'))
@app.route("/diff/<string:uuid>", methods=['GET', 'POST']) @app.route("/diff/<string:uuid>", methods=['GET', 'POST'])
@login_required @login_optionally_required
def diff_history_page(uuid): def diff_history_page(uuid):
from changedetectionio import forms from changedetectionio import forms
@@ -857,36 +908,35 @@ def changedetection_app(config=None, datastore_o=None):
# Save the current newest history as the most recently viewed # Save the current newest history as the most recently viewed
datastore.set_last_viewed(uuid, time.time()) datastore.set_last_viewed(uuid, time.time())
newest_file = history[dates[-1]]
# Read as binary and force decode as UTF-8 # Read as binary and force decode as UTF-8
# Windows may fail decode in python if we just use 'r' mode (chardet decode exception) # Windows may fail decode in python if we just use 'r' mode (chardet decode exception)
try: try:
with open(newest_file, 'r', encoding='utf-8', errors='ignore') as f: newest_version_file_contents = watch.get_history_snapshot(dates[-1])
newest_version_file_contents = f.read()
except Exception as e: except Exception as e:
newest_version_file_contents = "Unable to read {}.\n".format(newest_file) newest_version_file_contents = "Unable to read {}.\n".format(dates[-1])
previous_version = request.args.get('previous_version') previous_version = request.args.get('previous_version')
try:
previous_file = history[previous_version]
except KeyError:
# Not present, use a default value, the second one in the sorted list.
previous_file = history[dates[-2]]
previous_timestamp = dates[-2]
if previous_version:
previous_timestamp = previous_version
try: try:
with open(previous_file, 'r', encoding='utf-8', errors='ignore') as f: previous_version_file_contents = watch.get_history_snapshot(previous_timestamp)
previous_version_file_contents = f.read()
except Exception as e: except Exception as e:
previous_version_file_contents = "Unable to read {}.\n".format(previous_file) previous_version_file_contents = "Unable to read {}.\n".format(previous_timestamp)
screenshot_url = watch.get_screenshot() screenshot_url = watch.get_screenshot()
system_uses_webdriver = datastore.data['settings']['application']['fetch_backend'] == 'html_webdriver' system_uses_webdriver = datastore.data['settings']['application']['fetch_backend'] == 'html_webdriver'
is_html_webdriver = True if watch.get('fetch_backend') == 'html_webdriver' or (
watch.get('fetch_backend', None) is None and system_uses_webdriver) else False
is_html_webdriver = False
if (watch.get('fetch_backend') == 'system' and system_uses_webdriver) or watch.get('fetch_backend') == 'html_webdriver':
is_html_webdriver = True
password_enabled_and_share_is_off = False
if datastore.data['settings']['application'].get('password') or os.getenv("SALTED_PASS", False):
password_enabled_and_share_is_off = not datastore.data['settings']['application'].get('shared_diff_access')
output = render_template("diff.html", output = render_template("diff.html",
current_diff_url=watch['url'], current_diff_url=watch['url'],
@@ -901,6 +951,7 @@ def changedetection_app(config=None, datastore_o=None):
left_sticky=True, left_sticky=True,
newest=newest_version_file_contents, newest=newest_version_file_contents,
newest_version_timestamp=dates[-1], newest_version_timestamp=dates[-1],
password_enabled_and_share_is_off=password_enabled_and_share_is_off,
previous=previous_version_file_contents, previous=previous_version_file_contents,
screenshot=screenshot_url, screenshot=screenshot_url,
uuid=uuid, uuid=uuid,
@@ -911,7 +962,7 @@ def changedetection_app(config=None, datastore_o=None):
return output return output
@app.route("/preview/<string:uuid>", methods=['GET']) @app.route("/preview/<string:uuid>", methods=['GET'])
@login_required @login_optionally_required
def preview_page(uuid): def preview_page(uuid):
content = [] content = []
ignored_line_numbers = [] ignored_line_numbers = []
@@ -931,8 +982,9 @@ def changedetection_app(config=None, datastore_o=None):
extra_stylesheets = [url_for('static_content', group='styles', filename='diff.css')] extra_stylesheets = [url_for('static_content', group='styles', filename='diff.css')]
is_html_webdriver = True if watch.get('fetch_backend') == 'html_webdriver' or (
watch.get('fetch_backend', None) is None and system_uses_webdriver) else False
is_html_webdriver = False
if (watch.get('fetch_backend') == 'system' and system_uses_webdriver) or watch.get('fetch_backend') == 'html_webdriver':
is_html_webdriver = True
# Never requested successfully, but we detected a fetch error # Never requested successfully, but we detected a fetch error
if datastore.data['watching'][uuid].history_n == 0 and (watch.get_error_text() or watch.get_error_snapshot()): if datastore.data['watching'][uuid].history_n == 0 and (watch.get_error_text() or watch.get_error_snapshot()):
@@ -951,37 +1003,35 @@ def changedetection_app(config=None, datastore_o=None):
return output return output
timestamp = list(watch.history.keys())[-1] timestamp = list(watch.history.keys())[-1]
filename = watch.history[timestamp]
try: try:
with open(filename, 'r', encoding='utf-8', errors='ignore') as f:
tmp = f.readlines()
tmp = watch.get_history_snapshot(timestamp).splitlines()
# Get what needs to be highlighted # Get what needs to be highlighted
ignore_rules = watch.get('ignore_text', []) + datastore.data['settings']['application']['global_ignore_text'] ignore_rules = watch.get('ignore_text', []) + datastore.data['settings']['application']['global_ignore_text']
# .readlines will keep the \n, but we will parse it here again, in the future tidy this up # .readlines will keep the \n, but we will parse it here again, in the future tidy this up
ignored_line_numbers = html_tools.strip_ignore_text(content="".join(tmp), ignored_line_numbers = html_tools.strip_ignore_text(content="\n".join(tmp),
wordlist=ignore_rules, wordlist=ignore_rules,
mode='line numbers' mode='line numbers'
) )
trigger_line_numbers = html_tools.strip_ignore_text(content="".join(tmp), trigger_line_numbers = html_tools.strip_ignore_text(content="\n".join(tmp),
wordlist=watch['trigger_text'], wordlist=watch['trigger_text'],
mode='line numbers' mode='line numbers'
) )
# Prepare the classes and lines used in the template # Prepare the classes and lines used in the template
i=0 i=0
for l in tmp: for l in tmp:
classes=[] classes=[]
i+=1 i+=1
if i in ignored_line_numbers: if i in ignored_line_numbers:
classes.append('ignored') classes.append('ignored')
if i in trigger_line_numbers: if i in trigger_line_numbers:
classes.append('triggered') classes.append('triggered')
content.append({'line': l, 'classes': ' '.join(classes)}) content.append({'line': l, 'classes': ' '.join(classes)})
except Exception as e: except Exception as e:
content.append({'line': "File doesnt exist or unable to read file {}".format(filename), 'classes': ''}) content.append({'line': f"File doesnt exist or unable to read timestamp {timestamp}", 'classes': ''})
output = render_template("preview.html", output = render_template("preview.html",
content=content, content=content,
@@ -1001,7 +1051,7 @@ def changedetection_app(config=None, datastore_o=None):
return output return output
@app.route("/settings/notification-logs", methods=['GET']) @app.route("/settings/notification-logs", methods=['GET'])
@login_required @login_optionally_required
def notification_logs(): def notification_logs():
global notification_debug_log global notification_debug_log
output = render_template("notification-log.html", output = render_template("notification-log.html",
@@ -1011,7 +1061,7 @@ def changedetection_app(config=None, datastore_o=None):
# We're good but backups are even better! # We're good but backups are even better!
@app.route("/backup", methods=['GET']) @app.route("/backup", methods=['GET'])
@login_required @login_optionally_required
def get_backup(): def get_backup():
import zipfile import zipfile
@@ -1023,7 +1073,8 @@ def changedetection_app(config=None, datastore_o=None):
os.unlink(previous_backup_filename) os.unlink(previous_backup_filename)
# create a ZipFile object # create a ZipFile object
backupname = "changedetection-backup-{}.zip".format(int(time.time())) timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
backupname = "changedetection-backup-{}.zip".format(timestamp)
backup_filepath = os.path.join(datastore_o.datastore_path, backupname) backup_filepath = os.path.join(datastore_o.datastore_path, backupname)
with zipfile.ZipFile(backup_filepath, "w", with zipfile.ZipFile(backup_filepath, "w",
@@ -1131,13 +1182,14 @@ def changedetection_app(config=None, datastore_o=None):
abort(404) abort(404)
@app.route("/form/add/quickwatch", methods=['POST']) @app.route("/form/add/quickwatch", methods=['POST'])
@login_required @login_optionally_required
def form_quick_watch_add(): def form_quick_watch_add():
from changedetectionio import forms from changedetectionio import forms
form = forms.quickWatchForm(request.form) form = forms.quickWatchForm(request.form)
if not form.validate(): if not form.validate():
flash("Error") for widget, l in form.errors.items():
flash(','.join(l), 'error')
return redirect(url_for('index')) return redirect(url_for('index'))
url = request.form.get('url').strip() url = request.form.get('url').strip()
@@ -1146,24 +1198,24 @@ def changedetection_app(config=None, datastore_o=None):
return redirect(url_for('index')) return redirect(url_for('index'))
add_paused = request.form.get('edit_and_watch_submit_button') != None add_paused = request.form.get('edit_and_watch_submit_button') != None
new_uuid = datastore.add_watch(url=url, tag=request.form.get('tag').strip(), extras={'paused': add_paused})
if not add_paused and new_uuid:
# Straight into the queue.
update_q.put((1, new_uuid))
flash("Watch added.")
if add_paused:
flash('Watch added in Paused state, saving will unpause.')
return redirect(url_for('edit_page', uuid=new_uuid, unpause_on_save=1))
processor = request.form.get('processor', 'text_json_diff')
new_uuid = datastore.add_watch(url=url, tag=request.form.get('tag').strip(), extras={'paused': add_paused, 'processor': processor})
if new_uuid:
if add_paused:
flash('Watch added in Paused state, saving will unpause.')
return redirect(url_for('edit_page', uuid=new_uuid, unpause_on_save=1))
else:
# Straight into the queue.
update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': new_uuid}))
flash("Watch added.")
return redirect(url_for('index')) return redirect(url_for('index'))
@app.route("/api/delete", methods=['GET']) @app.route("/api/delete", methods=['GET'])
@login_required @login_optionally_required
def form_delete(): def form_delete():
uuid = request.args.get('uuid') uuid = request.args.get('uuid')
@@ -1180,7 +1232,7 @@ def changedetection_app(config=None, datastore_o=None):
return redirect(url_for('index')) return redirect(url_for('index'))
@app.route("/api/clone", methods=['GET']) @app.route("/api/clone", methods=['GET'])
@login_required @login_optionally_required
def form_clone(): def form_clone():
uuid = request.args.get('uuid') uuid = request.args.get('uuid')
# More for testing, possible to return the first/only # More for testing, possible to return the first/only
@@ -1188,15 +1240,17 @@ def changedetection_app(config=None, datastore_o=None):
uuid = list(datastore.data['watching'].keys()).pop() uuid = list(datastore.data['watching'].keys()).pop()
new_uuid = datastore.clone(uuid) new_uuid = datastore.clone(uuid)
update_q.put((5, new_uuid))
flash('Cloned.')
if new_uuid:
if not datastore.data['watching'].get(uuid).get('paused'):
update_q.put(queuedWatchMetaData.PrioritizedItem(priority=5, item={'uuid': new_uuid, 'skip_when_checksum_same': True}))
flash('Cloned.')
return redirect(url_for('index')) return redirect(url_for('index'))
@app.route("/api/checknow", methods=['GET']) @app.route("/api/checknow", methods=['GET'])
@login_required @login_optionally_required
def form_watch_checknow(): def form_watch_checknow():
# Forced recheck will skip the 'skip if content is the same' rule (, 'reprocess_existing_data': True})))
tag = request.args.get('tag') tag = request.args.get('tag')
uuid = request.args.get('uuid') uuid = request.args.get('uuid')
i = 0 i = 0
@@ -1205,11 +1259,9 @@ def changedetection_app(config=None, datastore_o=None):
for t in running_update_threads: for t in running_update_threads:
running_uuids.append(t.current_uuid) running_uuids.append(t.current_uuid)
# @todo check thread is running and skip
if uuid: if uuid:
if uuid not in running_uuids: if uuid not in running_uuids:
update_q.put((1, uuid)) update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': False}))
i = 1 i = 1
elif tag != None: elif tag != None:
@@ -1217,20 +1269,20 @@ def changedetection_app(config=None, datastore_o=None):
for watch_uuid, watch in datastore.data['watching'].items(): for watch_uuid, watch in datastore.data['watching'].items():
if (tag != None and tag in watch['tag']): if (tag != None and tag in watch['tag']):
if watch_uuid not in running_uuids and not datastore.data['watching'][watch_uuid]['paused']: if watch_uuid not in running_uuids and not datastore.data['watching'][watch_uuid]['paused']:
update_q.put((1, watch_uuid)) update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': watch_uuid, 'skip_when_checksum_same': False}))
i += 1 i += 1
else: else:
# No tag, no uuid, add everything. # No tag, no uuid, add everything.
for watch_uuid, watch in datastore.data['watching'].items(): for watch_uuid, watch in datastore.data['watching'].items():
if watch_uuid not in running_uuids and not datastore.data['watching'][watch_uuid]['paused']: if watch_uuid not in running_uuids and not datastore.data['watching'][watch_uuid]['paused']:
update_q.put((1, watch_uuid)) update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': watch_uuid, 'skip_when_checksum_same': False}))
i += 1 i += 1
flash("{} watches are queued for rechecking.".format(i)) flash("{} watches queued for rechecking.".format(i))
return redirect(url_for('index', tag=tag)) return redirect(url_for('index', tag=tag))
@app.route("/form/checkbox-operations", methods=['POST']) @app.route("/form/checkbox-operations", methods=['POST'])
@login_required @login_optionally_required
def form_watch_list_checkbox_operations(): def form_watch_list_checkbox_operations():
op = request.form['op'] op = request.form['op']
uuids = request.form.getlist('uuids') uuids = request.form.getlist('uuids')
@@ -1247,7 +1299,6 @@ def changedetection_app(config=None, datastore_o=None):
uuid = uuid.strip() uuid = uuid.strip()
if datastore.data['watching'].get(uuid): if datastore.data['watching'].get(uuid):
datastore.data['watching'][uuid.strip()]['paused'] = True datastore.data['watching'][uuid.strip()]['paused'] = True
flash("{} watches paused".format(len(uuids))) flash("{} watches paused".format(len(uuids)))
elif (op == 'unpause'): elif (op == 'unpause'):
@@ -1257,6 +1308,13 @@ def changedetection_app(config=None, datastore_o=None):
datastore.data['watching'][uuid.strip()]['paused'] = False datastore.data['watching'][uuid.strip()]['paused'] = False
flash("{} watches unpaused".format(len(uuids))) flash("{} watches unpaused".format(len(uuids)))
elif (op == 'mark-viewed'):
for uuid in uuids:
uuid = uuid.strip()
if datastore.data['watching'].get(uuid):
datastore.set_last_viewed(uuid, int(time.time()))
flash("{} watches updated".format(len(uuids)))
elif (op == 'mute'): elif (op == 'mute'):
for uuid in uuids: for uuid in uuids:
uuid = uuid.strip() uuid = uuid.strip()
@@ -1271,6 +1329,21 @@ def changedetection_app(config=None, datastore_o=None):
datastore.data['watching'][uuid.strip()]['notification_muted'] = False datastore.data['watching'][uuid.strip()]['notification_muted'] = False
flash("{} watches un-muted".format(len(uuids))) flash("{} watches un-muted".format(len(uuids)))
elif (op == 'recheck'):
for uuid in uuids:
uuid = uuid.strip()
if datastore.data['watching'].get(uuid):
# Recheck and require a full reprocessing
update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': False}))
flash("{} watches queued for rechecking".format(len(uuids)))
elif (op == 'clear-history'):
for uuid in uuids:
uuid = uuid.strip()
if datastore.data['watching'].get(uuid):
datastore.clear_watch_history(uuid)
flash("{} watches cleared/reset.".format(len(uuids)))
elif (op == 'notification-default'): elif (op == 'notification-default'):
from changedetectionio.notification import ( from changedetectionio.notification import (
default_notification_format_for_watch default_notification_format_for_watch
@@ -1287,7 +1360,7 @@ def changedetection_app(config=None, datastore_o=None):
return redirect(url_for('index')) return redirect(url_for('index'))
@app.route("/api/share-url", methods=['GET']) @app.route("/api/share-url", methods=['GET'])
@login_required @login_optionally_required
def form_share_put_watch(): def form_share_put_watch():
"""Given a watch UUID, upload the info and return a share-link """Given a watch UUID, upload the info and return a share-link
the share-link can be imported/added""" the share-link can be imported/added"""
@@ -1343,6 +1416,10 @@ def changedetection_app(config=None, datastore_o=None):
import changedetectionio.blueprint.browser_steps as browser_steps import changedetectionio.blueprint.browser_steps as browser_steps
app.register_blueprint(browser_steps.construct_blueprint(datastore), url_prefix='/browser-steps') app.register_blueprint(browser_steps.construct_blueprint(datastore), url_prefix='/browser-steps')
import changedetectionio.blueprint.price_data_follower as price_data_follower
app.register_blueprint(price_data_follower.construct_blueprint(datastore, update_q), url_prefix='/price_data_follower')
# @todo handle ctrl break # @todo handle ctrl break
ticker_thread = threading.Thread(target=ticker_thread_check_time_launch_checks).start() ticker_thread = threading.Thread(target=ticker_thread_check_time_launch_checks).start()
threading.Thread(target=notification_runner).start() threading.Thread(target=notification_runner).start()
@@ -1381,6 +1458,7 @@ def check_for_new_version():
# Check daily # Check daily
app.config.exit.wait(86400) app.config.exit.wait(86400)
def notification_runner(): def notification_runner():
global notification_debug_log global notification_debug_log
from datetime import datetime from datetime import datetime
@@ -1448,7 +1526,11 @@ def ticker_thread_check_time_launch_checks():
watch_uuid_list = [] watch_uuid_list = []
while True: while True:
try: try:
watch_uuid_list = datastore.data['watching'].keys()
# Get a list of watches sorted by last_checked, [1] because it gets passed a tuple
# This is so we examine the most over-due first
for k in sorted(datastore.data['watching'].items(), key=lambda item: item[1].get('last_checked',0)):
watch_uuid_list.append(k[0])
except RuntimeError as e: except RuntimeError as e:
# RuntimeError: dictionary changed size during iteration # RuntimeError: dictionary changed size during iteration
time.sleep(0.1) time.sleep(0.1)
@@ -1488,7 +1570,7 @@ def ticker_thread_check_time_launch_checks():
seconds_since_last_recheck = now - watch['last_checked'] seconds_since_last_recheck = now - watch['last_checked']
if seconds_since_last_recheck >= (threshold + watch.jitter_seconds) and seconds_since_last_recheck >= recheck_time_minimum_seconds: if seconds_since_last_recheck >= (threshold + watch.jitter_seconds) and seconds_since_last_recheck >= recheck_time_minimum_seconds:
if not uuid in running_uuids and uuid not in [q_uuid for p,q_uuid in update_q.queue]:
if not uuid in running_uuids and uuid not in [q_uuid.item['uuid'] for q_uuid in update_q.queue]:
# Proxies can be set to have a limit on seconds between which they can be called # Proxies can be set to have a limit on seconds between which they can be called
watch_proxy = datastore.get_preferred_proxy_for_watch(uuid=uuid) watch_proxy = datastore.get_preferred_proxy_for_watch(uuid=uuid)
@@ -1519,8 +1601,9 @@ def ticker_thread_check_time_launch_checks():
priority, priority,
watch.jitter_seconds, watch.jitter_seconds,
now - watch['last_checked'])) now - watch['last_checked']))
# Into the queue with you # Into the queue with you
update_q.put((priority, uuid)) update_q.put(queuedWatchMetaData.PrioritizedItem(priority=priority, item={'uuid': uuid, 'skip_when_checksum_same': True}))
# Reset for next time # Reset for next time
watch.jitter_seconds = 0 watch.jitter_seconds = 0
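All of the update_q.put((priority, uuid)) tuples above are replaced with queuedWatchMetaData.PrioritizedItem objects carrying a priority plus an item dict (uuid, skip_when_checksum_same, ...), and queue-membership checks now read q_uuid.item['uuid']. The real class lives in the changedetectionio.queuedWatchMetaData module, which is not part of this diff; a comparable sketch of the pattern, where only the priority participates in ordering:

import queue
from dataclasses import dataclass, field
from typing import Any

@dataclass(order=True)
class PrioritizedItem:
    priority: int
    # compare=False keeps the (unorderable) dict payload out of comparisons,
    # so the PriorityQueue orders purely by priority.
    item: Any = field(default=None, compare=False)

update_q = queue.PriorityQueue()
update_q.put(PrioritizedItem(priority=1, item={'uuid': 'abc-123', 'skip_when_checksum_same': False}))
update_q.put(PrioritizedItem(priority=5, item={'uuid': 'def-456', 'skip_when_checksum_same': True}))

# Membership checks iterate the queue's underlying heap list, as in the ticker thread above.
queued_uuids = [q_item.item['uuid'] for q_item in update_q.queue]
print(queued_uuids)
print(update_q.get().item)   # the priority-1 item is dequeued first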


@@ -0,0 +1,117 @@
# Responsible for building the storage dict into a set of rules ("JSON Schema") acceptable via the API
# Probably other ways to solve this when the backend switches to some ORM
def build_time_between_check_json_schema():
# Setup time between check schema
schema_properties_time_between_check = {
"type": "object",
"additionalProperties": False,
"properties": {}
}
for p in ['weeks', 'days', 'hours', 'minutes', 'seconds']:
schema_properties_time_between_check['properties'][p] = {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
]
}
return schema_properties_time_between_check
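# For reference, the sub-schema built above accepts each time unit as an integer or null and,
# because additionalProperties is False, rejects unknown keys. An illustrative check, assuming
# the jsonschema package (which flask_expects_json builds on) is available:
#
#   from jsonschema import validate
#   validate(instance={"weeks": None, "days": 1, "hours": 6, "minutes": 0, "seconds": None},
#            schema=build_time_between_check_json_schema())    # passes
#   validate(instance={"months": 1},
#            schema=build_time_between_check_json_schema())    # raises ValidationError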
def build_watch_json_schema(d):
# Base JSON schema
schema = {
'type': 'object',
'properties': {},
}
for k, v in d.items():
# @todo 'integer' is not covered here because its almost always for internal usage
if isinstance(v, type(None)):
schema['properties'][k] = {
"anyOf": [
{"type": "null"},
]
}
elif isinstance(v, list):
schema['properties'][k] = {
"anyOf": [
{"type": "array",
# Always is an array of strings, like text or regex or something
"items": {
"type": "string",
"maxLength": 5000
}
},
]
}
elif isinstance(v, bool):
schema['properties'][k] = {
"anyOf": [
{"type": "boolean"},
]
}
elif isinstance(v, str):
schema['properties'][k] = {
"anyOf": [
{"type": "string",
"maxLength": 5000},
]
}
# Can also be a string (or None by default above)
for v in ['body',
'notification_body',
'notification_format',
'notification_title',
'proxy',
'tag',
'title',
'webdriver_js_execute_code'
]:
schema['properties'][v]['anyOf'].append({'type': 'string', "maxLength": 5000})
# None or Boolean
schema['properties']['track_ldjson_price_data']['anyOf'].append({'type': 'boolean'})
schema['properties']['method'] = {"type": "string",
"enum": ["GET", "POST", "DELETE", "PUT"]
}
schema['properties']['fetch_backend']['anyOf'].append({"type": "string",
"enum": ["html_requests", "html_webdriver"]
})
# All headers must be key/value type dict
schema['properties']['headers'] = {
"type": "object",
"patternProperties": {
# Should always be a string:string type value
".*": {"type": "string"},
}
}
from changedetectionio.notification import valid_notification_formats
schema['properties']['notification_format'] = {'type': 'string',
'enum': list(valid_notification_formats.keys())
}
# Stuff that shouldn't be available but is just state-storage
for v in ['previous_md5', 'last_error', 'has_ldjson_price_data', 'previous_md5_before_filters', 'uuid']:
del schema['properties'][v]
schema['properties']['webdriver_delay']['anyOf'].append({'type': 'integer'})
schema['properties']['time_between_check'] = build_time_between_check_json_schema()
# headers ?
return schema
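The API resources below consume this module: the full schema is generated from the Watch model's base_config, then a create variant (which must include url) and an update variant (which rejects unknown properties) are derived for use with @expects_json. A rough standalone sketch of the same validation step; it assumes the jsonschema library (which flask_expects_json wraps) and reuses the watch_base_config import shown in the next file:

import copy
from jsonschema import validate, ValidationError
from changedetectionio.model.Watch import base_config as watch_base_config

schema = build_watch_json_schema(watch_base_config)

schema_create_watch = copy.deepcopy(schema)
schema_create_watch['required'] = ['url']

try:
    validate(instance={"title": "no url supplied"}, schema=schema_create_watch)
except ValidationError as e:
    print("create rejected:", e.message)    # 'url' is a required property

validate(instance={"url": "https://example.com", "tag": "example"}, schema=schema_create_watch)
print("create payload accepted")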


@@ -1,11 +1,24 @@
from flask_expects_json import expects_json
from changedetectionio import queuedWatchMetaData
from flask_restful import abort, Resource from flask_restful import abort, Resource
from flask import request, make_response from flask import request, make_response
import validators import validators
from . import auth from . import auth
import copy
# See docs/README.md for rebuilding the docs/apidoc information
from . import api_schema
# https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
# Build a JSON Schema at least partially based on our Watch model
from changedetectionio.model.Watch import base_config as watch_base_config
schema = api_schema.build_watch_json_schema(watch_base_config)
schema_create_watch = copy.deepcopy(schema)
schema_create_watch['required'] = ['url']
schema_update_watch = copy.deepcopy(schema)
schema_update_watch['additionalProperties'] = False
class Watch(Resource): class Watch(Resource):
def __init__(self, **kwargs): def __init__(self, **kwargs):
@@ -15,30 +28,100 @@ class Watch(Resource):
# Get information about a single watch, excluding the history list (can be large) # Get information about a single watch, excluding the history list (can be large)
# curl http://localhost:4000/api/v1/watch/<string:uuid> # curl http://localhost:4000/api/v1/watch/<string:uuid>
# @todo - version2 - ?muted and ?paused should be able to be called together, return the watch struct not "OK"
# ?recheck=true # ?recheck=true
@auth.check_token @auth.check_token
def get(self, uuid): def get(self, uuid):
"""
@api {get} /api/v1/watch/:uuid Get a single watch data
@apiDescription Retrieve watch information and set muted/paused status
@apiExample {curl} Example usage:
curl http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091 -H"x-api-key:813031b16330fe25e3780cf0325daa45"
curl "http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091?muted=unmuted" -H"x-api-key:813031b16330fe25e3780cf0325daa45"
curl "http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091?paused=unpaused" -H"x-api-key:813031b16330fe25e3780cf0325daa45"
@apiName Watch
@apiGroup Watch
@apiParam {uuid} uuid Watch unique ID.
@apiQuery {Boolean} [recheck] Recheck this watch `recheck=1`
@apiQuery {String} [paused] =`paused` or =`unpaused` , Sets the PAUSED state
@apiQuery {String} [muted] =`muted` or =`unmuted` , Sets the MUTE NOTIFICATIONS state
@apiSuccess (200) {String} OK When paused/muted/recheck operation OR full JSON object of the watch
@apiSuccess (200) {JSON} WatchJSON JSON Full JSON object of the watch
"""
from copy import deepcopy from copy import deepcopy
watch = deepcopy(self.datastore.data['watching'].get(uuid)) watch = deepcopy(self.datastore.data['watching'].get(uuid))
if not watch: if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid)) abort(404, message='No watch exists with the UUID of {}'.format(uuid))
if request.args.get('recheck'): if request.args.get('recheck'):
self.update_q.put((1, uuid)) self.update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': True}))
return "OK", 200
if request.args.get('paused', '') == 'paused':
self.datastore.data['watching'].get(uuid).pause()
return "OK", 200
elif request.args.get('paused', '') == 'unpaused':
self.datastore.data['watching'].get(uuid).unpause()
return "OK", 200
if request.args.get('muted', '') == 'muted':
self.datastore.data['watching'].get(uuid).mute()
return "OK", 200
elif request.args.get('muted', '') == 'unmuted':
self.datastore.data['watching'].get(uuid).unmute()
return "OK", 200 return "OK", 200
# Return without history, get that via another API call # Return without history, get that via another API call
# Properties are not returned as a JSON, so add the required props manually
watch['history_n'] = watch.history_n watch['history_n'] = watch.history_n
watch['last_changed'] = watch.last_changed
return watch return watch
@auth.check_token @auth.check_token
def delete(self, uuid): def delete(self, uuid):
"""
@api {delete} /api/v1/watch/:uuid Delete a watch and related history
@apiExample {curl} Example usage:
curl http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091 -X DELETE -H"x-api-key:813031b16330fe25e3780cf0325daa45"
@apiParam {uuid} uuid Watch unique ID.
@apiName Delete
@apiGroup Watch
@apiSuccess (200) {String} OK Was deleted
"""
if not self.datastore.data['watching'].get(uuid): if not self.datastore.data['watching'].get(uuid):
abort(400, message='No watch exists with the UUID of {}'.format(uuid)) abort(400, message='No watch exists with the UUID of {}'.format(uuid))
self.datastore.delete(uuid) self.datastore.delete(uuid)
return 'OK', 204 return 'OK', 204
@auth.check_token
@expects_json(schema_update_watch)
def put(self, uuid):
"""
@api {put} /api/v1/watch/:uuid Update watch information
@apiExample {curl} Example usage:
Update (PUT)
curl http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091 -X PUT -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json" -d '{"url": "https://my-nice.com" , "tag": "new list"}'
@apiDescription Updates an existing watch using JSON, accepts the same structure as returned in <a href="#api-Watch-Watch">get single watch information</a>
@apiParam {uuid} uuid Watch unique ID.
@apiName Update a watch
@apiGroup Watch
@apiSuccess (200) {String} OK Was updated
@apiSuccess (500) {String} ERR Some other error
"""
watch = self.datastore.data['watching'].get(uuid)
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
if request.json.get('proxy'):
plist = self.datastore.proxy_list
if not request.json.get('proxy') in plist:
return "Invalid proxy choice, currently supported proxies are '{}'".format(', '.join(plist)), 400
watch.update(request.json)
return "OK", 200
class WatchHistory(Resource): class WatchHistory(Resource):
def __init__(self, **kwargs): def __init__(self, **kwargs):
@@ -48,6 +131,21 @@ class WatchHistory(Resource):
# Get a list of available history for a watch by UUID # Get a list of available history for a watch by UUID
# curl http://localhost:4000/api/v1/watch/<string:uuid>/history # curl http://localhost:4000/api/v1/watch/<string:uuid>/history
def get(self, uuid): def get(self, uuid):
"""
@api {get} /api/v1/watch/<string:uuid>/history Get a list of all historical snapshots available for a watch
@apiDescription Requires `uuid`, returns list
@apiExample {curl} Example usage:
curl http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091/history -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json"
{
"1676649279": "/tmp/data/6a4b7d5c-fee4-4616-9f43-4ac97046b595/cb7e9be8258368262246910e6a2a4c30.txt",
"1677092785": "/tmp/data/6a4b7d5c-fee4-4616-9f43-4ac97046b595/e20db368d6fc633e34f559ff67bb4044.txt",
"1677103794": "/tmp/data/6a4b7d5c-fee4-4616-9f43-4ac97046b595/02efdd37dacdae96554a8cc85dc9c945.txt"
}
@apiName Get list of available stored snapshots for watch
@apiGroup Watch History
@apiSuccess (200) {String} OK
@apiSuccess (404) {String} ERR Not found
"""
watch = self.datastore.data['watching'].get(uuid) watch = self.datastore.data['watching'].get(uuid)
if not watch: if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid)) abort(404, message='No watch exists with the UUID of {}'.format(uuid))
@@ -59,11 +157,18 @@ class WatchSingleHistory(Resource):
# datastore is a black box dependency # datastore is a black box dependency
self.datastore = kwargs['datastore'] self.datastore = kwargs['datastore']
# Read a given history snapshot and return its content
# <string:timestamp> or "latest"
# curl http://localhost:4000/api/v1/watch/<string:uuid>/history/<int:timestamp>
@auth.check_token @auth.check_token
def get(self, uuid, timestamp): def get(self, uuid, timestamp):
"""
@api {get} /api/v1/watch/<string:uuid>/history/<int:timestamp> Get single snapshot from watch
@apiDescription Requires watch `uuid` and `timestamp`. `timestamp` of "`latest`" for latest available snapshot, or <a href="#api-Watch_History-Get_list_of_available_stored_snapshots_for_watch">use the list returned here</a>
@apiExample {curl} Example usage:
curl http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091/history/1677092977 -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json"
@apiName Get single snapshot content
@apiGroup Watch History
@apiSuccess (200) {String} OK
@apiSuccess (404) {String} ERR Not found
"""
watch = self.datastore.data['watching'].get(uuid) watch = self.datastore.data['watching'].get(uuid)
if not watch: if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid)) abort(404, message='No watch exists with the UUID of {}'.format(uuid))
@@ -74,8 +179,7 @@ class WatchSingleHistory(Resource):
if timestamp == 'latest': if timestamp == 'latest':
timestamp = list(watch.history.keys())[-1] timestamp = list(watch.history.keys())[-1]
with open(watch.history[timestamp], 'r') as f:
content = f.read()
content = watch.get_history_snapshot(timestamp)
response = make_response(content, 200) response = make_response(content, 200)
response.mimetype = "text/plain" response.mimetype = "text/plain"
@@ -89,36 +193,87 @@ class CreateWatch(Resource):
self.update_q = kwargs['update_q'] self.update_q = kwargs['update_q']
@auth.check_token @auth.check_token
@expects_json(schema_create_watch)
def post(self): def post(self):
# curl http://localhost:4000/api/v1/watch -H "Content-Type: application/json" -d '{"url": "https://my-nice.com", "tag": "one, two" }' """
@api {post} /api/v1/watch Create a single watch
@apiDescription Requires at least `url` set, can accept the same structure as <a href="#api-Watch-Watch">get single watch information</a> to create.
@apiExample {curl} Example usage:
curl http://localhost:4000/api/v1/watch -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json" -d '{"url": "https://my-nice.com" , "tag": "nice list"}'
@apiName Create
@apiGroup Watch
@apiSuccess (200) {String} OK Was created
@apiSuccess (500) {String} ERR Some other error
"""
json_data = request.get_json() json_data = request.get_json()
tag = json_data['tag'].strip() if json_data.get('tag') else ''
if not validators.url(json_data['url'].strip()):
return "Invalid or unsupported URL", 400
extras = {'title': json_data['title'].strip()} if json_data.get('title') else {}
new_uuid = self.datastore.add_watch(url=json_data['url'].strip(), tag=tag, extras=extras)
self.update_q.put((1, new_uuid))
return {'uuid': new_uuid}, 201
url = json_data['url'].strip()
if not validators.url(json_data['url'].strip()):
return "Invalid or unsupported URL", 400
if json_data.get('proxy'):
plist = self.datastore.proxy_list
if not json_data.get('proxy') in plist:
return "Invalid proxy choice, currently supported proxies are '{}'".format(', '.join(plist)), 400
extras = copy.deepcopy(json_data)
del extras['url']
new_uuid = self.datastore.add_watch(url=url, extras=extras)
if new_uuid:
self.update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': new_uuid, 'skip_when_checksum_same': True}))
return {'uuid': new_uuid}, 201
else:
return "Invalid or unsupported URL", 400
-   # Return concise list of available watches and some very basic info
-   # curl http://localhost:4000/api/v1/watch|python -mjson.tool
-   # ?recheck_all=1 to recheck all
    @auth.check_token
    def get(self):
+       """
+       @api {get} /api/v1/watch List watches
+       @apiDescription Return concise list of available watches and some very basic info
+       @apiExample {curl} Example usage:
+           curl http://localhost:4000/api/v1/watch -H"x-api-key:813031b16330fe25e3780cf0325daa45"
+           {
+               "6a4b7d5c-fee4-4616-9f43-4ac97046b595": {
+                   "last_changed": 1677103794,
+                   "last_checked": 1677103794,
+                   "last_error": false,
+                   "title": "",
+                   "url": "http://www.quotationspage.com/random.php"
+               },
+               "e6f5fd5c-dbfe-468b-b8f3-f9d6ff5ad69b": {
+                   "last_changed": 0,
+                   "last_checked": 1676662819,
+                   "last_error": false,
+                   "title": "QuickLook",
+                   "url": "https://github.com/QL-Win/QuickLook/tags"
+               }
+           }
+       @apiParam {String} [recheck_all] Optional Set to =1 to force recheck of all watches
+       @apiParam {String} [tag] Optional name of tag to limit results
+       @apiName ListWatches
+       @apiGroup Watch Management
+       @apiSuccess (200) {String} OK JSON dict
+       """
        list = {}
-       for k, v in self.datastore.data['watching'].items():
-           list[k] = {'url': v['url'],
-                      'title': v['title'],
-                      'last_checked': v['last_checked'],
-                      'last_changed': v.last_changed,
-                      'last_error': v['last_error']}
+
+       tag_limit = request.args.get('tag', None)
+       for k, watch in self.datastore.data['watching'].items():
+           if tag_limit:
+               if not tag_limit.lower() in watch.all_tags:
+                   continue
+
+           list[k] = {'url': watch['url'],
+                      'title': watch['title'],
+                      'last_checked': watch['last_checked'],
+                      'last_changed': watch.last_changed,
+                      'last_error': watch['last_error']}

        if request.args.get('recheck_all'):
            for uuid in self.datastore.data['watching'].keys():
-               self.update_q.put((1, uuid))
+               self.update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': True}))
            return {'status': "OK"}, 200

        return list, 200
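The two query parameters documented above combine with the existing listing call, for example (key value copied from the apidoc example; URL-encode the tag name):

    curl "http://localhost:4000/api/v1/watch?tag=nice%20list" -H"x-api-key:813031b16330fe25e3780cf0325daa45"
    curl "http://localhost:4000/api/v1/watch?recheck_all=1" -H"x-api-key:813031b16330fe25e3780cf0325daa45"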
@@ -131,6 +286,22 @@ class SystemInfo(Resource):
    @auth.check_token
    def get(self):
+       """
+       @api {get} /api/v1/systeminfo Return system info
+       @apiDescription Return some info about the current system state
+       @apiExample {curl} Example usage:
+           curl http://localhost:4000/api/v1/systeminfo -H"x-api-key:813031b16330fe25e3780cf0325daa45"
+           HTTP/1.0 200
+           {
+               'queue_size': 10 ,
+               'overdue_watches': ["watch-uuid-list"],
+               'uptime': 38344.55,
+               'watch_count': 800,
+               'version': "0.40.1"
+           }
+       @apiName Get Info
+       @apiGroup System Information
+       """
        import time
        overdue_watches = []
@@ -149,10 +320,11 @@ class SystemInfo(Resource):
            # Allow 5 minutes of grace time before we decide it's overdue
            if time_since_check - (5 * 60) > t:
                overdue_watches.append(uuid)
+       from changedetectionio import __version__ as main_version
        return {
            'queue_size': self.update_q.qsize(),
            'overdue_watches': overdue_watches,
            'uptime': round(time.time() - self.datastore.start_time, 2),
-           'watch_count': len(self.datastore.data.get('watching', {}))
+           'watch_count': len(self.datastore.data.get('watching', {})),
+           'version': main_version
        }, 200


@@ -23,63 +23,110 @@
from distutils.util import strtobool
from flask import Blueprint, request, make_response
-from flask_login import login_required
import os
import logging
from changedetectionio.store import ChangeDetectionStore
+from changedetectionio import login_optionally_required

-browsersteps_live_ui_o = {}
-browsersteps_playwright_browser_interface = None
-browsersteps_playwright_browser_interface_browser = None
-browsersteps_playwright_browser_interface_context = None
-browsersteps_playwright_browser_interface_end_time = None
-browsersteps_playwright_browser_interface_start_time = None
-
-def cleanup_playwright_session():
-    global browsersteps_live_ui_o
-    global browsersteps_playwright_browser_interface
-    global browsersteps_playwright_browser_interface_browser
-    global browsersteps_playwright_browser_interface_context
-    global browsersteps_playwright_browser_interface_end_time
-    global browsersteps_playwright_browser_interface_start_time
-
-    browsersteps_live_ui_o = {}
-    browsersteps_playwright_browser_interface = None
-    browsersteps_playwright_browser_interface_browser = None
-    browsersteps_playwright_browser_interface_end_time = None
-    browsersteps_playwright_browser_interface_start_time = None
-
-    print("Cleaning up old playwright session because time was up, calling .goodbye()")
-    try:
-        browsersteps_playwright_browser_interface_context.goodbye()
-    except Exception as e:
-        print ("Got exception in shutdown, probably OK")
-        print (str(e))
-
-    browsersteps_playwright_browser_interface_context = None
-    print ("Cleaning up old playwright session because time was up - done")
+browsersteps_sessions = {}
+io_interface_context = None

def construct_blueprint(datastore: ChangeDetectionStore):
    browser_steps_blueprint = Blueprint('browser_steps', __name__, template_folder="templates")

-    @login_required
-    @browser_steps_blueprint.route("/browsersteps_update", methods=['GET', 'POST'])
+    def start_browsersteps_session(watch_uuid):
+        from . import nonContext
+        from . import browser_steps
+        import time
+        global browsersteps_sessions
+        global io_interface_context
+
+        # We keep the playwright session open for many minutes
+        seconds_keepalive = int(os.getenv('BROWSERSTEPS_MINUTES_KEEPALIVE', 10)) * 60
+
+        browsersteps_start_session = {'start_time': time.time()}
+
+        # You can only have one of these running
+        # This should be very fine to leave running for the life of the application
+        # @idea - Make it global so the pool of watch fetchers can use it also
+        if not io_interface_context:
+            io_interface_context = nonContext.c_sync_playwright()
+            # Start the Playwright context, which is actually a nodejs sub-process and communicates over STDIN/STDOUT pipes
+            io_interface_context = io_interface_context.start()
+
+        # keep it alive for 10 seconds more than we advertise, sometimes it helps to keep it shutting down cleanly
+        keepalive = "&timeout={}".format(((seconds_keepalive + 3) * 1000))
+        try:
+            browsersteps_start_session['browser'] = io_interface_context.chromium.connect_over_cdp(
+                os.getenv('PLAYWRIGHT_DRIVER_URL', '') + keepalive)
+        except Exception as e:
+            if 'ECONNREFUSED' in str(e):
+                return make_response('Unable to start the Playwright Browser session, is it running?', 401)
+            else:
+                return make_response(str(e), 401)
+
+        proxy_id = datastore.get_preferred_proxy_for_watch(uuid=watch_uuid)
+        proxy = None
+        if proxy_id:
+            proxy_url = datastore.proxy_list.get(proxy_id).get('url')
+            if proxy_url:
+                # Playwright needs separate username and password values
+                from urllib.parse import urlparse
+                parsed = urlparse(proxy_url)
+                proxy = {'server': proxy_url}
+                if parsed.username:
+                    proxy['username'] = parsed.username
+                if parsed.password:
+                    proxy['password'] = parsed.password
+                print("Browser Steps: UUID {} selected proxy {}".format(watch_uuid, proxy_url))
+
+        # Tell Playwright to connect to Chrome and setup a new session via our stepper interface
+        browsersteps_start_session['browserstepper'] = browser_steps.browsersteps_live_ui(
+            playwright_browser=browsersteps_start_session['browser'],
+            proxy=proxy)
+
+        # For test
+        #browsersteps_start_session['browserstepper'].action_goto_url(value="http://example.com?time="+str(time.time()))
+
+        return browsersteps_start_session
+
+    @login_optionally_required
+    @browser_steps_blueprint.route("/browsersteps_start_session", methods=['GET'])
+    def browsersteps_start_session():
+        # A new session was requested, return sessionID
+        import uuid
+        global browsersteps_sessions
+
+        browsersteps_session_id = str(uuid.uuid4())
+        watch_uuid = request.args.get('uuid')
+
+        if not watch_uuid:
+            return make_response('No Watch UUID specified', 500)
+
+        print("Starting connection with playwright")
+        logging.debug("browser_steps.py connecting")
+        browsersteps_sessions[browsersteps_session_id] = start_browsersteps_session(watch_uuid)
+        print("Starting connection with playwright - done")
+        return {'browsersteps_session_id': browsersteps_session_id}
+
+    # A request for an action was received
+    @login_optionally_required
+    @browser_steps_blueprint.route("/browsersteps_update", methods=['POST'])
    def browsersteps_ui_update():
        import base64
        import playwright._impl._api_types
-        import time
+        global browsersteps_sessions
        from changedetectionio.blueprint.browser_steps import browser_steps

-        global browsersteps_live_ui_o, browsersteps_playwright_browser_interface_end_time
-        global browsersteps_playwright_browser_interface_browser
-        global browsersteps_playwright_browser_interface
-        global browsersteps_playwright_browser_interface_start_time
-        step_n = None

        remaining =0
        uuid = request.args.get('uuid')
@@ -88,13 +135,9 @@ def construct_blueprint(datastore: ChangeDetectionStore):
        if not browsersteps_session_id:
            return make_response('No browsersteps_session_id specified', 500)

-       # Because we don't "really" run in a context manager ( we make the playwright interface global/long-living )
-       # We need to manage the shutdown when the time is up
-       if browsersteps_playwright_browser_interface_end_time:
-           remaining = browsersteps_playwright_browser_interface_end_time-time.time()
-           if browsersteps_playwright_browser_interface_end_time and remaining <= 0:
-               cleanup_playwright_session()
-               return make_response('Browser session expired, please reload the Browser Steps interface', 401)
+       if not browsersteps_sessions.get(browsersteps_session_id):
+           return make_response('No session exists under that ID', 500)

        # Actions - step/apply/etc, do the thing and return state
        if request.method == 'POST':
@@ -107,18 +150,13 @@ def construct_blueprint(datastore: ChangeDetectionStore):
            if step_operation == 'Goto site':
                step_operation = 'goto_url'
-               step_optional_value = None
-               step_selector = datastore.data['watching'][uuid].get('url')
+               step_optional_value = datastore.data['watching'][uuid].get('url')
+               step_selector = None

            # @todo try.. accept.. nice errors not popups..
            try:
-               this_session = browsersteps_live_ui_o.get(browsersteps_session_id)
-               if not this_session:
-                   print("Browser exited")
-                   return make_response('Browser session ran out of time :( Please reload this page.', 401)
-
-               this_session.call_action(action_name=step_operation,
+               browsersteps_sessions[browsersteps_session_id]['browserstepper'].call_action(action_name=step_operation,
                                         selector=step_selector,
                                         optional_value=step_optional_value)
@@ -130,99 +168,43 @@ def construct_blueprint(datastore: ChangeDetectionStore):
            # Get visual selector ready/update its data (also use the current filter info from the page?)
            # When the last 'apply' button was pressed
            # @todo this adds overhead because the xpath selection is happening twice
-           u = this_session.page.url
+           u = browsersteps_sessions[browsersteps_session_id]['browserstepper'].page.url
            if is_last_step and u:
-               (screenshot, xpath_data) = this_session.request_visualselector_data()
+               (screenshot, xpath_data) = browsersteps_sessions[browsersteps_session_id]['browserstepper'].request_visualselector_data()
                datastore.save_screenshot(watch_uuid=uuid, screenshot=screenshot)
                datastore.save_xpath_data(watch_uuid=uuid, data=xpath_data)

-       # Setup interface
-       if request.method == 'GET':
-
-           if not browsersteps_playwright_browser_interface:
-               print("Starting connection with playwright")
-               logging.debug("browser_steps.py connecting")
-
-               global browsersteps_playwright_browser_interface_context
-               from . import nonContext
-               browsersteps_playwright_browser_interface_context = nonContext.c_sync_playwright()
-               browsersteps_playwright_browser_interface = browsersteps_playwright_browser_interface_context.start()
-
-               time.sleep(1)
-               # At 20 minutes, some other variable is closing it
-               # @todo find out what it is and set it
-               seconds_keepalive = int(os.getenv('BROWSERSTEPS_MINUTES_KEEPALIVE', 10)) * 60
-
-               # keep it alive for 10 seconds more than we advertise, sometimes it helps to keep it shutting down cleanly
-               keepalive = "&timeout={}".format(((seconds_keepalive+3) * 1000))
-               try:
-                   browsersteps_playwright_browser_interface_browser = browsersteps_playwright_browser_interface.chromium.connect_over_cdp(
-                       os.getenv('PLAYWRIGHT_DRIVER_URL', '') + keepalive)
-               except Exception as e:
-                   if 'ECONNREFUSED' in str(e):
-                       return make_response('Unable to start the Playwright session properly, is it running?', 401)
-
-               browsersteps_playwright_browser_interface_end_time = time.time() + (seconds_keepalive-3)
-               print("Starting connection with playwright - done")
-
-           if not browsersteps_live_ui_o.get(browsersteps_session_id):
-               # Boot up a new session
-               proxy_id = datastore.get_preferred_proxy_for_watch(uuid=uuid)
-               proxy = None
-               if proxy_id:
-                   proxy_url = datastore.proxy_list.get(proxy_id).get('url')
-                   if proxy_url:
-                       proxy = {'server': proxy_url}
-                       print("Browser Steps: UUID {} Using proxy {}".format(uuid, proxy_url))
-
-               # Begin the new "Playwright Context" that re-uses the playwright interface
-               # Each session is a "Playwright Context" as a list, that uses the playwright interface
-               browsersteps_live_ui_o[browsersteps_session_id] = browser_steps.browsersteps_live_ui(
-                   playwright_browser=browsersteps_playwright_browser_interface_browser,
-                   proxy=proxy)
-
-           this_session = browsersteps_live_ui_o[browsersteps_session_id]
-
-       if not this_session.page:
-           cleanup_playwright_session()
-           return make_response('Browser session ran out of time :( Please reload this page.', 401)
-
-       response = None
-
-       if request.method == 'POST':
-           # Screenshots and other info only needed on requesting a step (POST)
-           try:
-               state = this_session.get_current_state()
-           except playwright._impl._api_types.Error as e:
-               return make_response("Browser session ran out of time :( Please reload this page."+str(e), 401)
-
-           # Use send_file() which is way faster than read/write loop on bytes
-           import json
-           from tempfile import mkstemp
-           from flask import send_file
-           tmp_fd, tmp_file = mkstemp(text=True, suffix=".json", prefix="changedetectionio-")
-
-           output = json.dumps({'screenshot': "data:image/jpeg;base64,{}".format(
-               base64.b64encode(state[0]).decode('ascii')),
-               'xpath_data': state[1],
-               'session_age_start': this_session.age_start,
-               'browser_time_remaining': round(remaining)
-           })
-
-           with os.fdopen(tmp_fd, 'w') as f:
-               f.write(output)
-
-           response = make_response(send_file(path_or_file=tmp_file,
-                                              mimetype='application/json; charset=UTF-8',
-                                              etag=True))
-           # No longer needed
-           os.unlink(tmp_file)
-
-       elif request.method == 'GET':
-           # Just enough to get the session rolling, it will call for goto-site via POST next
-           response = make_response({
-               'session_age_start': this_session.age_start,
-               'browser_time_remaining': round(remaining)
-           })
+       # if not this_session.page:
+       #     cleanup_playwright_session()
+       #     return make_response('Browser session ran out of time :( Please reload this page.', 401)
+
+       # Screenshots and other info only needed on requesting a step (POST)
+       try:
+           state = browsersteps_sessions[browsersteps_session_id]['browserstepper'].get_current_state()
+       except playwright._impl._api_types.Error as e:
+           return make_response("Browser session ran out of time :( Please reload this page."+str(e), 401)
+
+       # Use send_file() which is way faster than read/write loop on bytes
+       import json
+       from tempfile import mkstemp
+       from flask import send_file
+       tmp_fd, tmp_file = mkstemp(text=True, suffix=".json", prefix="changedetectionio-")
+
+       output = json.dumps({'screenshot': "data:image/jpeg;base64,{}".format(
+           base64.b64encode(state[0]).decode('ascii')),
+           'xpath_data': state[1],
+           'session_age_start': browsersteps_sessions[browsersteps_session_id]['browserstepper'].age_start,
+           'browser_time_remaining': round(remaining)
+       })
+
+       with os.fdopen(tmp_fd, 'w') as f:
+           f.write(output)
+
+       response = make_response(send_file(path_or_file=tmp_file,
+                                          mimetype='application/json; charset=UTF-8',
+                                          etag=True))
+       # No longer needed
+       os.unlink(tmp_file)

        return response
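Taken together, the reworked blueprint replaces the old single GET/POST endpoint with a two-step flow; a hedged outline (the blueprint's mount prefix is set where it is registered elsewhere, shown here as "..."):

    # 1. Ask for a session bound to a watch:
    #    GET  .../browsersteps_start_session?uuid=<watch-uuid>
    #         -> {"browsersteps_session_id": "<session-id>"}
    # 2. Drive that session one step at a time:
    #    POST .../browsersteps_update?uuid=<watch-uuid>&browsersteps_session_id=<session-id>
    #         -> JSON with 'screenshot', 'xpath_data', 'session_age_start', 'browser_time_remaining'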


@@ -25,12 +25,14 @@ browser_step_ui_config = {'Choose one': '0 0',
                          'Execute JS': '0 1',
                          # 'Extract text and use as filter': '1 0',
                          'Goto site': '0 0',
+                         'Goto URL': '0 1',
                          'Press Enter': '0 0',
                          'Select by label': '1 1',
                          'Scroll down': '0 0',
                          'Uncheck checkbox': '1 0',
                          'Wait for seconds': '0 1',
                          'Wait for text': '0 1',
+                         'Wait for text in element': '1 1',
                          # 'Press Page Down': '0 0',
                          # 'Press Page Up': '0 0',
                          # weird bug, come back to it later
@@ -53,7 +55,7 @@ class steppable_browser_interface():
        print("> action calling", call_action_name)
        # https://playwright.dev/python/docs/selectors#xpath-selectors
-       if selector.startswith('/') and not selector.startswith('//'):
+       if selector and selector.startswith('/') and not selector.startswith('//'):
            selector = "xpath=" + selector

        action_handler = getattr(self, "action_" + call_action_name)
@@ -69,21 +71,19 @@ class steppable_browser_interface():
            optional_value = str(jinja2_env.from_string(optional_value).render())

        action_handler(selector, optional_value)
-       self.page.wait_for_timeout(3 * 1000)
+       self.page.wait_for_timeout(1.5 * 1000)
        print("Call action done in", time.time() - now)

-   def action_goto_url(self, url, optional_value):
+   def action_goto_url(self, selector=None, value=None):
        # self.page.set_viewport_size({"width": 1280, "height": 5000})
        now = time.time()
-       response = self.page.goto(url, timeout=0, wait_until='domcontentloaded')
-       print("Time to goto URL", time.time() - now)
+       response = self.page.goto(value, timeout=0, wait_until='commit')

        # Wait_until = commit
        # - `'commit'` - consider operation to be finished when network response is received and the document started loading.
        # Better to not use any smarts from Playwright and just wait an arbitrary number of seconds
        # This seemed to solve nearly all 'TimeoutErrors'
-       extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5))
-       self.page.wait_for_timeout(extra_wait * 1000)
+       print("Time to goto URL ", time.time() - now)

    def action_click_element_containing_text(self, selector=None, value=''):
        if not len(value.strip()):
@@ -105,7 +105,8 @@ class steppable_browser_interface():
        print("Clicking element")
        if not len(selector.strip()):
            return
-       self.page.click(selector, timeout=10 * 1000, delay=randint(200, 500))
+
+       self.page.click(selector=selector, timeout=30 * 1000, delay=randint(200, 500))

    def action_click_element_if_exists(self, selector, value):
        import playwright._impl._api_types as _api_types
@@ -132,7 +133,18 @@ class steppable_browser_interface():
        self.page.wait_for_timeout(1000)

    def action_wait_for_seconds(self, selector, value):
-       self.page.wait_for_timeout(int(value) * 1000)
+       self.page.wait_for_timeout(float(value.strip()) * 1000)
+
+   def action_wait_for_text(self, selector, value):
+       import json
+       v = json.dumps(value)
+       self.page.wait_for_function(f'document.querySelector("body").innerText.includes({v});', timeout=90000)
+
+   def action_wait_for_text_in_element(self, selector, value):
+       import json
+       s = json.dumps(selector)
+       v = json.dumps(value)
+       self.page.wait_for_function(f'document.querySelector({s}).innerText.includes({v});', timeout=90000)

    # @todo - in the future make some popout interface to capture what needs to be set
    # https://playwright.dev/python/docs/api/class-keyboard
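A minimal sketch of the escaping trick used by the two new wait actions: json.dumps() turns the user-supplied selector/text into valid JavaScript string literals, so quotes or backslashes cannot break the predicate handed to page.wait_for_function():

    import json

    value = 'Out of "stock"'
    predicate = f'document.querySelector("body").innerText.includes({json.dumps(value)});'
    # -> document.querySelector("body").innerText.includes("Out of \"stock\"");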


@@ -0,0 +1,33 @@
+from distutils.util import strtobool
+from flask import Blueprint, flash, redirect, url_for
+from flask_login import login_required
+from changedetectionio.store import ChangeDetectionStore
+from changedetectionio import queuedWatchMetaData
+from queue import PriorityQueue
+
+PRICE_DATA_TRACK_ACCEPT = 'accepted'
+PRICE_DATA_TRACK_REJECT = 'rejected'
+
+def construct_blueprint(datastore: ChangeDetectionStore, update_q: PriorityQueue):
+
+    price_data_follower_blueprint = Blueprint('price_data_follower', __name__)
+
+    @login_required
+    @price_data_follower_blueprint.route("/<string:uuid>/accept", methods=['GET'])
+    def accept(uuid):
+        datastore.data['watching'][uuid]['track_ldjson_price_data'] = PRICE_DATA_TRACK_ACCEPT
+        update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': False}))
+        return redirect(url_for("form_watch_checknow", uuid=uuid))
+
+    @login_required
+    @price_data_follower_blueprint.route("/<string:uuid>/reject", methods=['GET'])
+    def reject(uuid):
+        datastore.data['watching'][uuid]['track_ldjson_price_data'] = PRICE_DATA_TRACK_REJECT
+        return redirect(url_for("index"))
+
+    return price_data_follower_blueprint


@@ -3,11 +3,14 @@
# Launch as a eventlet.wsgi server instance.

from distutils.util import strtobool
+from json.decoder import JSONDecodeError

import eventlet
import eventlet.wsgi
import getopt
import os
import signal
+import socket
import sys

from . import store, changedetection_app, content_fetcher
@@ -28,11 +31,13 @@ def sigterm_handler(_signo, _stack_frame):
def main():
    global datastore
    global app

-   ssl_mode = False
-   host = ''
-   port = os.environ.get('PORT') or 5000
-   do_cleanup = False
    datastore_path = None
+   do_cleanup = False
+   host = ''
+   ipv6_enabled = False
+   port = os.environ.get('PORT') or 5000
+   ssl_mode = False

    # On Windows, create and use a default path.
    if os.name == 'nt':
@@ -43,7 +48,7 @@ def main():
        datastore_path = os.path.join(os.getcwd(), "../datastore")

    try:
-       opts, args = getopt.getopt(sys.argv[1:], "Ccsd:h:p:", "port")
+       opts, args = getopt.getopt(sys.argv[1:], "6Ccsd:h:p:", "port")
    except getopt.GetoptError:
        print('backend.py -s SSL enable -h [host] -p [port] -d [datastore path]')
        sys.exit(2)
@@ -63,6 +68,10 @@ def main():
        if opt == '-d':
            datastore_path = arg

+       if opt == '-6':
+           print ("Enabling IPv6 listen support")
+           ipv6_enabled = True
+
        # Cleanup (remove text files that arent in the index)
        if opt == '-c':
            do_cleanup = True
@@ -83,8 +92,14 @@ def main():
              "Or use the -C parameter to create the directory.".format(app_config['datastore_path']), file=sys.stderr)
        sys.exit(2)

-   datastore = store.ChangeDetectionStore(datastore_path=app_config['datastore_path'], version_tag=__version__)
+   try:
+       datastore = store.ChangeDetectionStore(datastore_path=app_config['datastore_path'], version_tag=__version__)
+   except JSONDecodeError as e:
+       # Dont' start if the JSON DB looks corrupt
+       print ("ERROR: JSON DB or Proxy List JSON at '{}' appears to be corrupt, aborting".format(app_config['datastore_path']))
+       print(str(e))
+       return

    app = changedetection_app(app_config, datastore)

    signal.signal(signal.SIGTERM, sigterm_handler)
@@ -124,13 +139,15 @@ def main():
        from werkzeug.middleware.proxy_fix import ProxyFix
        app.wsgi_app = ProxyFix(app.wsgi_app, x_prefix=1, x_host=1)

+   s_type = socket.AF_INET6 if ipv6_enabled else socket.AF_INET
+
    if ssl_mode:
        # @todo finalise SSL config, but this should get you in the right direction if you need it.
-       eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen((host, port)),
+       eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen((host, port), s_type),
                                               certfile='cert.pem',
                                               keyfile='privkey.pem',
                                               server_side=True), app)

    else:
-       eventlet.wsgi.server(eventlet.listen((host, int(port))), app)
+       eventlet.wsgi.server(eventlet.listen((host, int(port)), s_type), app)
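A hedged launch example for the new flag (entry point name taken from the usage string printed above; other flags unchanged):

    # backend.py -6 -d /datastore -p 5000
    # -6 switches the eventlet listener from socket.AF_INET to socket.AF_INET6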


@@ -1,3 +1,4 @@
+import hashlib
from abc import abstractmethod
import chardet
import json
@@ -9,6 +10,7 @@ import time
visualselector_xpath_selectors = 'div,span,form,table,tbody,tr,td,a,p,ul,li,h1,h2,h3,h4, header, footer, section, article, aside, details, main, nav, section, summary'

class Non200ErrorCodeReceived(Exception):
    def __init__(self, status_code, url, screenshot=None, xpath_data=None, page_html=None):
        # Set this so we can use it in other parts of the app
@@ -24,6 +26,11 @@ class Non200ErrorCodeReceived(Exception):
        return

+class checksumFromPreviousCheckWasTheSame(Exception):
+    def __init__(self):
+        return
+
class JSActionExceptions(Exception):
    def __init__(self, status_code, url, screenshot, message=''):
        self.status_code = status_code
@@ -32,6 +39,7 @@ class JSActionExceptions(Exception):
        self.message = message
        return

class BrowserStepsStepTimout(Exception):
    def __init__(self, step_n):
        self.step_n = step_n
@@ -39,7 +47,7 @@ class BrowserStepsStepTimout(Exception):

class PageUnloadable(Exception):
-    def __init__(self, status_code, url, screenshot=False, message=False):
+    def __init__(self, status_code, url, message, screenshot=False):
        # Set this so we can use it in other parts of the app
        self.status_code = status_code
        self.url = url
@@ -47,6 +55,7 @@ class PageUnloadable(Exception):
        self.message = message
        return

class EmptyReply(Exception):
    def __init__(self, status_code, url, screenshot=None):
        # Set this so we can use it in other parts of the app
@@ -55,6 +64,7 @@ class EmptyReply(Exception):
        self.screenshot = screenshot
        return

class ScreenshotUnavailable(Exception):
    def __init__(self, status_code, url, page_html=None):
        # Set this so we can use it in other parts of the app
@@ -65,6 +75,7 @@ class ScreenshotUnavailable(Exception):
            self.page_text = html_to_text(page_html)
        return

class ReplyWithContentButNoText(Exception):
    def __init__(self, status_code, url, screenshot=None):
        # Set this so we can use it in other parts of the app
@@ -73,19 +84,20 @@ class ReplyWithContentButNoText(Exception):
        self.screenshot = screenshot
        return

class Fetcher():
-    error = None
-    status_code = None
-    content = None
-    headers = None
    browser_steps = None
    browser_steps_screenshot_path = None
+    content = None
+    error = None
    fetcher_description = "No description"
+    headers = {}
+    status_code = None
    webdriver_js_execute_code = None
-    xpath_element_js = ""
    xpath_data = None
+    xpath_element_js = ""
+    instock_data = None
+    instock_data_js = ""

    # Will be needed in the future by the VisualSelector, always get this where possible.
    screenshot = False
@@ -99,7 +111,7 @@ class Fetcher():
        from pkg_resources import resource_string
        # The code that scrapes elements and makes a list of elements/size/position to click on in the VisualSelector
        self.xpath_element_js = resource_string(__name__, "res/xpath_element_scraper.js").decode('utf-8')
+       self.instock_data_js = resource_string(__name__, "res/stock-not-in-stock.js").decode('utf-8')

    @abstractmethod
    def get_error(self):
@@ -113,7 +125,8 @@ class Fetcher():
            request_body,
            request_method,
            ignore_status_codes=False,
-           current_include_filters=None):
+           current_include_filters=None,
+           is_binary=False):
        # Should set self.error, self.status_code and self.content
        pass
@@ -134,6 +147,13 @@ class Fetcher():
    def is_ready(self):
        return True

+   def get_all_headers(self):
+       """
+       Get all headers but ensure all keys are lowercase
+       :return:
+       """
+       return {k.lower(): v for k, v in self.headers.items()}
+
    def iterate_browser_steps(self):
        from changedetectionio.blueprint.browser_steps.browser_steps import steppable_browser_interface
        from playwright._impl._api_types import TimeoutError
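The new get_all_headers() helper only normalises key case; a minimal sketch of the same idea, so later content-type checks work regardless of how the server cased its headers:

    headers = {'Content-Type': 'text/html; charset=UTF-8', 'X-Cache': 'HIT'}
    lowered = {k.lower(): v for k, v in headers.items()}
    assert lowered['content-type'].startswith('text/html')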
@@ -146,13 +166,15 @@ class Fetcher():
        interface = steppable_browser_interface()
        interface.page = self.page

-       valid_steps = filter(lambda s: (s['operation'] and len(s['operation']) and s['operation'] != 'Choose one' and s['operation'] != 'Goto site'), self.browser_steps)
+       valid_steps = filter(
+           lambda s: (s['operation'] and len(s['operation']) and s['operation'] != 'Choose one' and s['operation'] != 'Goto site'),
+           self.browser_steps)

        for step in valid_steps:
            step_n += 1
            print(">> Iterating check - browser Step n {} - {}...".format(step_n, step['operation']))
-           self.screenshot_step("before-"+str(step_n))
-           self.save_step_html("before-"+str(step_n))
+           self.screenshot_step("before-" + str(step_n))
+           self.save_step_html("before-" + str(step_n))
            try:
                optional_value = step['optional_value']
                selector = step['selector']
@@ -167,12 +189,11 @@ class Fetcher():
                                      optional_value=optional_value)
                self.screenshot_step(step_n)
                self.save_step_html(step_n)
-           except TimeoutError:
+           except TimeoutError as e:
+               print(str(e))
                # Stop processing here
                raise BrowserStepsStepTimout(step_n=step_n)

    # It's always good to reset these
    def delete_browser_steps_screenshots(self):
        import glob
@@ -182,6 +203,7 @@ class Fetcher():
            for f in files:
                os.unlink(f)

# Maybe for the future, each fetcher provides its own diff output, could be used for text, image
# the current one would return javascript output (as we use JS to generate the diff)
#
@@ -199,6 +221,7 @@ def available_fetchers():
    return p

class base_html_playwright(Fetcher):
    fetcher_description = "Playwright {}/Javascript".format(
        os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').capitalize()
@@ -238,10 +261,15 @@ class base_html_playwright(Fetcher):
        if proxy_override:
            self.proxy = {'server': proxy_override}

+       if self.proxy:
+           # Playwright needs separate username and password values
+           from urllib.parse import urlparse
+           parsed = urlparse(self.proxy.get('server'))
+           if parsed.username:
+               self.proxy['username'] = parsed.username
+               self.proxy['password'] = parsed.password
+
    def screenshot_step(self, step_n=''):
-       # There's a bug where we need to do it twice or it doesnt take the whole page, dont know why.
-       self.page.screenshot(type='jpeg', clip={'x': 1.0, 'y': 1.0, 'width': 1280, 'height': 1024})
        screenshot = self.page.screenshot(type='jpeg', full_page=True, quality=85)

        if self.browser_steps_screenshot_path is not None:
@@ -257,6 +285,119 @@ class base_html_playwright(Fetcher):
            with open(destination, 'w') as f:
                f.write(content)

+   def run_fetch_browserless_puppeteer(self,
+           url,
+           timeout,
+           request_headers,
+           request_body,
+           request_method,
+           ignore_status_codes=False,
+           current_include_filters=None,
+           is_binary=False):
+
+       from pkg_resources import resource_string
+
+       extra_wait_ms = (int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay) * 1000
+
+       self.xpath_element_js = self.xpath_element_js.replace('%ELEMENTS%', visualselector_xpath_selectors)
+       code = resource_string(__name__, "res/puppeteer_fetch.js").decode('utf-8')
+       # In the future inject this is a proper JS package
+       code = code.replace('%xpath_scrape_code%', self.xpath_element_js)
+       code = code.replace('%instock_scrape_code%', self.instock_data_js)
+
+       from requests.exceptions import ConnectTimeout, ReadTimeout
+       wait_browserless_seconds = 240
+
+       browserless_function_url = os.getenv('BROWSERLESS_FUNCTION_URL')
+       from urllib.parse import urlparse
+       if not browserless_function_url:
+           # Convert/try to guess from PLAYWRIGHT_DRIVER_URL
+           o = urlparse(os.getenv('PLAYWRIGHT_DRIVER_URL'))
+           browserless_function_url = o._replace(scheme="http")._replace(path="function").geturl()
+
+       # Append proxy connect string
+       if self.proxy:
+           import urllib.parse
+           # Remove username/password if it exists in the URL or you will receive "ERR_NO_SUPPORTED_PROXIES" error
+           # Actual authentication handled by Puppeteer/node
+           o = urlparse(self.proxy.get('server'))
+           proxy_url = urllib.parse.quote(o._replace(netloc="{}:{}".format(o.hostname, o.port)).geturl())
+           browserless_function_url = f"{browserless_function_url}&--proxy-server={proxy_url}&dumpio=true"
+
+       try:
+           amp = '&' if '?' in browserless_function_url else '?'
+           response = requests.request(
+               method="POST",
+               json={
+                   "code": code,
+                   "context": {
+                       # Very primitive disk cache - USE WITH EXTREME CAUTION
+                       # Run browserless container with -e "FUNCTION_BUILT_INS=[\"fs\",\"crypto\"]"
+                       'disk_cache_dir': os.getenv("PUPPETEER_DISK_CACHE", False),  # or path to disk cache ending in /, ie /tmp/cache/
+                       'execute_js': self.webdriver_js_execute_code,
+                       'extra_wait_ms': extra_wait_ms,
+                       'include_filters': current_include_filters,
+                       'req_headers': request_headers,
+                       'screenshot_quality': int(os.getenv("PLAYWRIGHT_SCREENSHOT_QUALITY", 72)),
+                       'url': url,
+                       'user_agent': request_headers.get('User-Agent', 'Mozilla/5.0'),
+                       'proxy_username': self.proxy.get('username', '') if self.proxy else False,
+                       'proxy_password': self.proxy.get('password', '') if self.proxy else False,
+                       'no_cache_list': [
+                           'twitter',
+                           '.pdf'
+                       ],
+                       # Could use https://github.com/easylist/easylist here, or install a plugin
+                       'block_url_list': [
+                           'adnxs.com',
+                           'analytics.twitter.com',
+                           'doubleclick.net',
+                           'google-analytics.com',
+                           'googletagmanager',
+                           'trustpilot.com'
+                       ]
+                   }
+               },
+               # @todo /function needs adding ws:// to http:// rebuild this
+               url=browserless_function_url + f"{amp}--disable-features=AudioServiceOutOfProcess&dumpio=true&--disable-remote-fonts",
+               timeout=wait_browserless_seconds)
+
+       except ReadTimeout:
+           raise PageUnloadable(url=url, status_code=None, message=f"No response from browserless in {wait_browserless_seconds}s")
+       except ConnectTimeout:
+           raise PageUnloadable(url=url, status_code=None, message=f"Timed out connecting to browserless, retrying..")
+       else:
+           # 200 Here means that the communication to browserless worked only, not the page state
+           if response.status_code == 200:
+               import base64
+
+               x = response.json()
+               if not x.get('screenshot'):
+                   # https://github.com/puppeteer/puppeteer/blob/v1.0.0/docs/troubleshooting.md#tips
+                   # https://github.com/puppeteer/puppeteer/issues/1834
+                   # https://github.com/puppeteer/puppeteer/issues/1834#issuecomment-381047051
+                   # Check your memory is shared and big enough
+                   raise ScreenshotUnavailable(url=url, status_code=None)
+
+               if not x.get('content', '').strip():
+                   raise EmptyReply(url=url, status_code=None)
+
+               if x.get('status_code', 200) != 200 and not ignore_status_codes:
+                   raise Non200ErrorCodeReceived(url=url, status_code=x.get('status_code', 200), page_html=x['content'])
+
+               self.content = x.get('content')
+               self.headers = x.get('headers')
+               self.instock_data = x.get('instock_data')
+               self.screenshot = base64.b64decode(x.get('screenshot'))
+               self.status_code = x.get('status_code')
+               self.xpath_data = x.get('xpath_data')
+
+           else:
+               # Some other error from browserless
+               raise PageUnloadable(url=url, status_code=None, message=response.content.decode('utf-8'))
    def run(self,
            url,
            timeout,
@@ -264,7 +405,26 @@ class base_html_playwright(Fetcher):
            request_body,
            request_method,
            ignore_status_codes=False,
-           current_include_filters=None):
+           current_include_filters=None,
+           is_binary=False):
+
+       # For now, USE_EXPERIMENTAL_PUPPETEER_FETCH is not supported by watches with BrowserSteps (for now!)
+       has_browser_steps = self.browser_steps and list(filter(
+           lambda s: (s['operation'] and len(s['operation']) and s['operation'] != 'Choose one' and s['operation'] != 'Goto site'),
+           self.browser_steps))
+
+       if not has_browser_steps:
+           if os.getenv('USE_EXPERIMENTAL_PUPPETEER_FETCH'):
+               # Temporary backup solution until we rewrite the playwright code
+               return self.run_fetch_browserless_puppeteer(
+                   url,
+                   timeout,
+                   request_headers,
+                   request_body,
+                   request_method,
+                   ignore_status_codes,
+                   current_include_filters,
+                   is_binary)

        from playwright.sync_api import sync_playwright
        import playwright._impl._api_types
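A hedged deployment note: the experimental path above is only taken when the watch has no Browser Steps and the environment variable is set, for example in the environment of the container that runs the fetchers (driver URL value illustrative):

    # USE_EXPERIMENTAL_PUPPETEER_FETCH=yes
    # PLAYWRIGHT_DRIVER_URL=ws://browserless:3000    (or set BROWSERLESS_FUNCTION_URL directly)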
@@ -282,10 +442,12 @@ class base_html_playwright(Fetcher):
            # Set user agent to prevent Cloudflare from blocking the browser
            # Use the default one configured in the App.py model that's passed from fetch_site_status.py
            context = browser.new_context(
-               user_agent=request_headers['User-Agent'] if request_headers.get('User-Agent') else 'Mozilla/5.0',
+               user_agent=request_headers.get('User-Agent', 'Mozilla/5.0'),
                proxy=self.proxy,
                # This is needed to enable JavaScript execution on GitHub and others
                bypass_csp=True,
+               # Should be `allow` or `block` - sites like YouTube can transmit large amounts of data via Service Workers
+               service_workers=os.getenv('PLAYWRIGHT_SERVICE_WORKERS', 'allow'),
                # Should never be needed
                accept_downloads=False
            )
@@ -294,24 +456,34 @@ class base_html_playwright(Fetcher):
            if len(request_headers):
                context.set_extra_http_headers(request_headers)

-           try:
            self.page.set_default_navigation_timeout(90000)
            self.page.set_default_timeout(90000)

            # Listen for all console events and handle errors
            self.page.on("console", lambda msg: print(f"Playwright console: Watch URL: {url} {msg.type}: {msg.text} {msg.args}"))

-           # Bug - never set viewport size BEFORE page.goto
-           # Waits for the next navigation. Using Python context manager
-           # prevents a race condition between clicking and waiting for a navigation.
-           with self.page.expect_navigation():
-               response = self.page.goto(url, wait_until='load')
+           # Goto page
+           try:
                # Wait_until = commit
                # - `'commit'` - consider operation to be finished when network response is received and the document started loading.
                # Better to not use any smarts from Playwright and just wait an arbitrary number of seconds
                # This seemed to solve nearly all 'TimeoutErrors'
+               response = self.page.goto(url, wait_until='commit')
+           except playwright._impl._api_types.Error as e:
+               # Retry once - https://github.com/browserless/chrome/issues/2485
+               # Sometimes errors related to invalid cert's and other can be random
+               print("Content Fetcher > retrying request got error - ", str(e))
+               time.sleep(1)
+               response = self.page.goto(url, wait_until='commit')
+           except Exception as e:
+               print("Content Fetcher > Other exception when page.goto", str(e))
+               context.close()
+               browser.close()
+               raise PageUnloadable(url=url, status_code=None, message=str(e))

+           # Execute any browser steps
+           try:
                extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
                self.page.wait_for_timeout(extra_wait * 1000)
@@ -324,43 +496,32 @@ class base_html_playwright(Fetcher):
                    # This can be ok, we will try to grab what we could retrieve
                    pass
                except Exception as e:
-                   print ("other exception when page.goto")
-                   print (str(e))
+                   print("Content Fetcher > Other exception when executing custom JS code", str(e))
                    context.close()
                    browser.close()
-                   raise PageUnloadable(url=url, status_code=None)
+                   raise PageUnloadable(url=url, status_code=None, message=str(e))

            if response is None:
                context.close()
                browser.close()
-               print ("response object was none")
+               print("Content Fetcher > Response object was none")
                raise EmptyReply(url=url, status_code=None)

-           # Bug 2(?) Set the viewport size AFTER loading the page
-           self.page.set_viewport_size({"width": 1280, "height": 1024})
-
            # Run Browser Steps here
            self.iterate_browser_steps()

            extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
            time.sleep(extra_wait)

            self.content = self.page.content()
            self.status_code = response.status
            if len(self.page.content().strip()) == 0:
                context.close()
                browser.close()
-               print ("Content was empty")
-               raise EmptyReply(url=url, status_code=None)
+               print("Content Fetcher > Content was empty")
+               raise EmptyReply(url=url, status_code=response.status)

+           # Bug 2(?) Set the viewport size AFTER loading the page
+           self.page.set_viewport_size({"width": 1280, "height": 1024})
+
            self.status_code = response.status
-           self.content = self.page.content()
            self.headers = response.all_headers()
@@ -369,7 +530,9 @@ class base_html_playwright(Fetcher):
            else:
                self.page.evaluate("var include_filters=''")

-           self.xpath_data = self.page.evaluate("async () => {" + self.xpath_element_js.replace('%ELEMENTS%', visualselector_xpath_selectors) + "}")
+           self.xpath_data = self.page.evaluate(
+               "async () => {" + self.xpath_element_js.replace('%ELEMENTS%', visualselector_xpath_selectors) + "}")
+           self.instock_data = self.page.evaluate("async () => {" + self.instock_data_js + "}")

            # Bug 3 in Playwright screenshot handling
            # Some bug where it gives the wrong screenshot size, but making a request with the clip set first seems to solve it
@@ -379,10 +542,9 @@ class base_html_playwright(Fetcher):
            # which will significantly increase the IO size between the server and client, it's recommended to use the lowest
            # acceptable screenshot quality here
            try:
-               # Quality set to 1 because it's not used, just used as a work-around for a bug, no need to change this.
-               self.page.screenshot(type='jpeg', clip={'x': 1.0, 'y': 1.0, 'width': 1280, 'height': 1024}, quality=1)
                # The actual screenshot
-               self.screenshot = self.page.screenshot(type='jpeg', full_page=True, quality=int(os.getenv("PLAYWRIGHT_SCREENSHOT_QUALITY", 72)))
+               self.screenshot = self.page.screenshot(type='jpeg', full_page=True,
+                                                      quality=int(os.getenv("PLAYWRIGHT_SCREENSHOT_QUALITY", 72)))
            except Exception as e:
                context.close()
                browser.close()
@@ -391,6 +553,7 @@ class base_html_playwright(Fetcher):
        context.close()
        browser.close()

class base_html_webdriver(Fetcher):
    if os.getenv("WEBDRIVER_URL"):
        fetcher_description = "WebDriver Chrome/Javascript via '{}'".format(os.getenv("WEBDRIVER_URL"))
@@ -440,7 +603,8 @@ class base_html_webdriver(Fetcher):
            request_body,
            request_method,
            ignore_status_codes=False,
-           current_include_filters=None):
+           current_include_filters=None,
+           is_binary=False):

        from selenium import webdriver
        from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
@@ -498,7 +662,7 @@ class base_html_webdriver(Fetcher):
        try:
            self.driver.quit()
        except Exception as e:
-           print("Exception in chrome shutdown/quit" + str(e))
+           print("Content Fetcher > Exception in chrome shutdown/quit" + str(e))

# "html_requests" is listed as the default fetcher in store.py!
@@ -515,7 +679,8 @@ class html_requests(Fetcher):
            request_body,
            request_method,
            ignore_status_codes=False,
-           current_include_filters=None):
+           current_include_filters=None,
+           is_binary=False):

        # Make requests use a more modern looking user-agent
        if not 'User-Agent' in request_headers:
@@ -545,10 +710,12 @@ class html_requests(Fetcher):
        # For example - some sites don't tell us it's utf-8, but return utf-8 content
        # This seems to not occur when using webdriver/selenium, it seems to detect the text encoding more reliably.
        # https://github.com/psf/requests/issues/1604 good info about requests encoding detection
-       if not r.headers.get('content-type') or not 'charset=' in r.headers.get('content-type'):
-           encoding = chardet.detect(r.content)['encoding']
-           if encoding:
-               r.encoding = encoding
+       if not is_binary:
+           # Don't run this for PDF (and requests identified as binary) takes a _long_ time
+           if not r.headers.get('content-type') or not 'charset=' in r.headers.get('content-type'):
+               encoding = chardet.detect(r.content)['encoding']
+               if encoding:
+                   r.encoding = encoding

        if not r.content or not len(r.content):
            raise EmptyReply(url=url, status_code=r.status_code)
@@ -560,8 +727,14 @@ class html_requests(Fetcher):
            raise Non200ErrorCodeReceived(url=url, status_code=r.status_code, page_html=r.text)

        self.status_code = r.status_code
-       self.content = r.text
+       if is_binary:
+           # Binary files just return their checksum until we add something smarter
+           self.content = hashlib.md5(r.content).hexdigest()
+       else:
+           self.content = r.text
+
        self.headers = r.headers
+       self.raw_content = r.content

# Decide which is the 'real' HTML webdriver, this is more a system wide config
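A minimal sketch of the new is_binary behaviour: rather than decoding, a binary body is reduced to a stable checksum that the change-detection step can still compare between runs:

    import hashlib

    pdf_bytes = b'%PDF-1.7 ...'                     # illustrative payload
    content_for_comparison = hashlib.md5(pdf_bytes).hexdigest()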


@@ -1,14 +0,0 @@
-FROM python:3.8-slim
-# https://stackoverflow.com/questions/58701233/docker-logs-erroneously-appears-empty-until-container-stops
-ENV PYTHONUNBUFFERED=1
-WORKDIR /app
-RUN [ ! -d "/datastore" ] && mkdir /datastore
-COPY sleep.py /
-CMD [ "python", "/sleep.py" ]


@@ -1,7 +0,0 @@
-import time
-print ("Sleep loop, you should run your script from the console")
-while True:
-    # Wait for 5 seconds
-    time.sleep(2)


@@ -10,7 +10,7 @@ def same_slicer(l, a, b):
    return l[a:b]

# like .compare but a little different output
-def customSequenceMatcher(before, after, include_equal=False):
+def customSequenceMatcher(before, after, include_equal=False, include_removed=True, include_added=True, include_replaced=True, include_change_type_prefix=True):
    cruncher = difflib.SequenceMatcher(isjunk=lambda x: x in " \\t", a=before, b=after)

    # @todo Line-by-line mode instead of buncghed, including `after` that is not in `before` (maybe unset?)
@@ -18,34 +18,39 @@ def customSequenceMatcher(before, after, include_equal=False):
        if include_equal and tag == 'equal':
            g = before[alo:ahi]
            yield g
-       elif tag == 'delete':
-           g = ["(removed) " + i for i in same_slicer(before, alo, ahi)]
+       elif include_removed and tag == 'delete':
+           row_prefix = "(removed) " if include_change_type_prefix else ''
+           g = [ row_prefix + i for i in same_slicer(before, alo, ahi)]
            yield g
-       elif tag == 'replace':
-           g = ["(changed) " + i for i in same_slicer(before, alo, ahi)]
-           g += ["(into ) " + i for i in same_slicer(after, blo, bhi)]
+       elif include_replaced and tag == 'replace':
+           row_prefix = "(changed) " if include_change_type_prefix else ''
+           g = [row_prefix + i for i in same_slicer(before, alo, ahi)]
+           row_prefix = "(into) " if include_change_type_prefix else ''
+           g += [row_prefix + i for i in same_slicer(after, blo, bhi)]
            yield g
-       elif tag == 'insert':
-           g = ["(added ) " + i for i in same_slicer(after, blo, bhi)]
+       elif include_added and tag == 'insert':
+           row_prefix = "(added) " if include_change_type_prefix else ''
+           g = [row_prefix + i for i in same_slicer(after, blo, bhi)]
            yield g

# only_differences - only return info about the differences, no context
-# line_feed_sep could be "<br/>" or "<li>" or "\n" etc
-def render_diff(previous_file, newest_file, include_equal=False, line_feed_sep="\n"):
-   with open(newest_file, 'r') as f:
-       newest_version_file_contents = f.read()
-       newest_version_file_contents = [line.rstrip() for line in newest_version_file_contents.splitlines()]
-
-   if previous_file:
-       with open(previous_file, 'r') as f:
-           previous_version_file_contents = f.read()
-           previous_version_file_contents = [line.rstrip() for line in previous_version_file_contents.splitlines()]
+# line_feed_sep could be "<br>" or "<li>" or "\n" etc
+def render_diff(previous_version_file_contents, newest_version_file_contents, include_equal=False, include_removed=True, include_added=True, include_replaced=True, line_feed_sep="\n", include_change_type_prefix=True):
+
+   newest_version_file_contents = [line.rstrip() for line in newest_version_file_contents.splitlines()]
+
+   if previous_version_file_contents:
+       previous_version_file_contents = [line.rstrip() for line in previous_version_file_contents.splitlines()]
    else:
        previous_version_file_contents = ""

-   rendered_diff = customSequenceMatcher(previous_version_file_contents,
-                                         newest_version_file_contents,
-                                         include_equal)
+   rendered_diff = customSequenceMatcher(before=previous_version_file_contents,
+                                         after=newest_version_file_contents,
+                                         include_equal=include_equal,
+                                         include_removed=include_removed,
+                                         include_added=include_added,
+                                         include_replaced=include_replaced,
+                                         include_change_type_prefix=include_change_type_prefix)

    # Recursively join lists
    f = lambda L: line_feed_sep.join([f(x) if type(x) is list else x for x in L])
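A hedged usage sketch of the reworked renderer: it now takes the two versions as strings rather than file paths, and the change-type prefixes (and each change class) can be switched off via the new keyword arguments:

    old_text = "price: 10\nin stock"
    new_text = "price: 12\nin stock"
    html_fragment = render_diff(old_text, new_text,
                                include_equal=False,
                                include_change_type_prefix=False,
                                line_feed_sep="<br>")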


@@ -138,7 +138,7 @@ class ValidateContentFetcherIsReady(object):
        from changedetectionio import content_fetcher

        # Better would be a radiohandler that keeps a reference to each class
        if field.data is not None and field.data != 'system':
            klass = getattr(content_fetcher, field.data)
            some_object = klass()
            try:
@@ -147,12 +147,12 @@ class ValidateContentFetcherIsReady(object):
            except urllib3.exceptions.MaxRetryError as e:
                driver_url = some_object.command_executor
                message = field.gettext('Content fetcher \'%s\' did not respond.' % (field.data))
                message += '<br>' + field.gettext(
                    'Be sure that the selenium/webdriver runner is running and accessible via network from this container/host.')
                message += '<br>' + field.gettext('Did you follow the instructions in the wiki?')
                message += '<br><br>' + field.gettext('WebDriver Host: %s' % (driver_url))
                message += '<br><a href="https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver">Go here for more information</a>'
                message += '<br>'+field.gettext('Content fetcher did not respond properly, unable to use it.\n %s' % (str(e)))

                raise ValidationError(message)
@@ -232,12 +232,17 @@ class validateURL(object):
    def __call__(self, form, field):
        import validators
        try:
            validators.url(field.data.strip())
        except validators.ValidationFailure:
            message = field.gettext('\'%s\' is not a valid URL.' % (field.data.strip()))
            raise ValidationError(message)

        from .model.Watch import is_safe_url
        if not is_safe_url(field.data):
            raise ValidationError('Watch protocol is not permitted by SAFE_PROTOCOL_REGEX')
class ValidateListRegex(object):
    """
@@ -339,9 +344,12 @@ class ValidateCSSJSONXPATHInput(object):
            raise ValidationError("A system-error occurred when validating your jq expression")

class quickWatchForm(Form):
    from . import processors

    url = fields.URLField('URL', validators=[validateURL()])
    tag = StringField('Group tag', [validators.Optional()])
    watch_submit_button = SubmitField('Watch', render_kw={"class": "pure-button pure-button-primary"})
    processor = RadioField(u'Processor', choices=processors.available_processors(), default="text_json_diff")
    edit_and_watch_submit_button = SubmitField('Edit > Watch', render_kw={"class": "pure-button pure-button-primary"})

@@ -355,6 +363,10 @@ class commonSettingsForm(Form):
    extract_title_as_title = BooleanField('Extract <title> from document and use as watch title', default=False)
    webdriver_delay = IntegerField('Wait seconds before extracting text', validators=[validators.Optional(), validators.NumberRange(min=1,
                                                                                                                                     message="Should contain one or more seconds")])

class importForm(Form):
    from . import processors
    processor = RadioField(u'Processor', choices=processors.available_processors(), default="text_json_diff")
    urls = TextAreaField('URLs')

class SingleBrowserStep(Form):
@@ -387,11 +399,19 @@ class watchForm(commonSettingsForm):
    body = TextAreaField('Request body', [validators.Optional()])
    method = SelectField('Request method', choices=valid_method, default=default_method)
    ignore_status_codes = BooleanField('Ignore status codes (process non-2xx status codes as normal)', default=False)
    check_unique_lines = BooleanField('Only trigger when unique lines appear', default=False)

    filter_text_added = BooleanField('Added lines', default=True)
    filter_text_replaced = BooleanField('Replaced/changed lines', default=True)
    filter_text_removed = BooleanField('Removed lines', default=True)

    # @todo this class could be moved to its own text_json_diff_watchForm and this goes to restock_diff_Watchform perhaps
    in_stock_only = BooleanField('Only trigger when product goes BACK to in-stock', default=True)

    trigger_text = StringListField('Trigger/wait for text', [validators.Optional(), ValidateListRegex()])
    if os.getenv("PLAYWRIGHT_DRIVER_URL"):
        browser_steps = FieldList(FormField(SingleBrowserStep), min_entries=10)
    text_should_not_be_present = StringListField('Block change-detection while text matches', [validators.Optional(), ValidateListRegex()])
    webdriver_js_execute_code = TextAreaField('Execute JavaScript before change detection', render_kw={"rows": "5"}, validators=[validators.Optional()])
    save_button = SubmitField('Save', render_kw={"class": "pure-button pure-button-primary"})
@@ -426,6 +446,13 @@ class watchForm(commonSettingsForm):
        return result

class SingleExtraProxy(Form):

    # maybe better to set some <script>var..
    proxy_name = StringField('Name', [validators.Optional()], render_kw={"placeholder": "Name"})
    proxy_url = StringField('Proxy URL', [validators.Optional()], render_kw={"placeholder": "http://user:pass@...:3128", "size":50})
    # @todo do the validation here instead

# datastore.data['settings']['requests']..
class globalSettingsRequestForm(Form):
    time_between_check = FormField(TimeBetweenCheckForm)
@@ -433,21 +460,34 @@ class globalSettingsRequestForm(Form):
    jitter_seconds = IntegerField('Random jitter seconds ± check',
                                  render_kw={"style": "width: 5em;"},
                                  validators=[validators.NumberRange(min=0, message="Should contain zero or more seconds")])
    extra_proxies = FieldList(FormField(SingleExtraProxy), min_entries=5)

    def validate_extra_proxies(self, extra_validators=None):
        for e in self.data['extra_proxies']:
            if e.get('proxy_name') or e.get('proxy_url'):
                if not e.get('proxy_name','').strip() or not e.get('proxy_url','').strip():
                    self.extra_proxies.errors.append('Both a name, and a Proxy URL is required.')
                    return False
# datastore.data['settings']['application']..
class globalSettingsApplicationForm(commonSettingsForm):

    api_access_token_enabled = BooleanField('API access token security check enabled', default=True, validators=[validators.Optional()])
    base_url = StringField('Base URL', validators=[validators.Optional()])
    empty_pages_are_a_change = BooleanField('Treat empty pages as a change?', default=False)
    fetch_backend = RadioField('Fetch Method', default="html_requests", choices=content_fetcher.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
    global_ignore_text = StringListField('Ignore Text', [ValidateListRegex()])
    global_subtractive_selectors = StringListField('Remove elements', [ValidateCSSJSONXPATHInput(allow_xpath=False, allow_json=False)])
    ignore_whitespace = BooleanField('Ignore whitespace')
    password = SaltyPasswordField()
    pager_size = IntegerField('Pager size',
                              render_kw={"style": "width: 5em;"},
                              validators=[validators.NumberRange(min=0,
                                                                 message="Should be atleast zero (disabled)")])
    removepassword_button = SubmitField('Remove password', render_kw={"class": "pure-button pure-button-primary"})
    render_anchor_tag_content = BooleanField('Render anchor tag content', default=False)
    shared_diff_access = BooleanField('Allow access to view diff page when password is enabled', default=False, validators=[validators.Optional()])
    filter_failure_notification_threshold_attempts = IntegerField('Number of times the filter can be missing before sending a notification',
                                                                  render_kw={"style": "width: 5em;"},
                                                                  validators=[validators.NumberRange(min=0,


@@ -8,7 +8,11 @@ import json
import re

# HTML added to be sure each result matching a filter (.example) gets converted to a new line by Inscriptis
TEXT_FILTER_LIST_LINE_SUFFIX = "<br>"

# 'price' , 'lowPrice', 'highPrice' are usually under here
# all of those may or may not appear on different websites
LD_JSON_PRODUCT_OFFER_SELECTOR = "json:$..offers"

class JSONNotFound(ValueError):
    def __init__(self, msg):
@@ -127,37 +131,54 @@ def _get_stripped_text_from_json_match(match):
    return stripped_text_from_html
# content - json
# json_filter - ie json:$..price
# ensure_is_ldjson_info_type - str "product", optional, "@type == product" (I dont know how to do that as a json selector)
def extract_json_as_string(content, json_filter, ensure_is_ldjson_info_type=None):

    stripped_text_from_html = False

    # Try to parse/filter out the JSON, if we get some parser error, then maybe it's embedded within HTML tags
    try:
        stripped_text_from_html = _parse_json(json.loads(content), json_filter)
    except json.JSONDecodeError:

        # Foreach <script json></script> blob.. just return the first that matches json_filter
        # As a last resort, try to parse the whole <body>
        s = []
        soup = BeautifulSoup(content, 'html.parser')

        if ensure_is_ldjson_info_type:
            bs_result = soup.findAll('script', {"type": "application/ld+json"})
        else:
            bs_result = soup.findAll('script')
            bs_result += soup.findAll('body')

        bs_jsons = []
        for result in bs_result:
            # Skip empty tags, and things that dont even look like JSON
            if not result.text or '{' not in result.text:
                continue
            try:
                json_data = json.loads(result.text)
                bs_jsons.append(json_data)
            except json.JSONDecodeError:
                # Skip objects which cannot be parsed
                continue

        if not bs_jsons:
            raise JSONNotFound("No parsable JSON found in this document")

        for json_data in bs_jsons:
            stripped_text_from_html = _parse_json(json_data, json_filter)
            if ensure_is_ldjson_info_type:
                # Could sometimes be list, string or something else random
                if isinstance(json_data, dict):
                    # If it has LD JSON 'key' @type, and @type is 'product', and something was found for the search
                    # (Some sites have multiple of the same ld+json @type='product', but some have the review part, some have the 'price' part)
                    if json_data.get('@type', False) and json_data.get('@type','').lower() == ensure_is_ldjson_info_type.lower() and stripped_text_from_html:
                        break
            elif stripped_text_from_html:
                break

    if not stripped_text_from_html:
        # Re 265 - Just return an empty string when filter not found
@@ -243,6 +264,18 @@ def html_to_text(html_content: str, render_anchor_tag_content=False) -> str:
    return text_content

# Does LD+JSON exist with a @type=='product' and a .price set anywhere?
def has_ldjson_product_info(content):
    try:
        pricing_data = extract_json_as_string(content=content, json_filter=LD_JSON_PRODUCT_OFFER_SELECTOR, ensure_is_ldjson_info_type="product")
    except JSONNotFound as e:
        # Totally fine
        return False
    x = bool(pricing_data)
    return x
def workarounds_for_obfuscations(content):
    """
    Some sites are using sneaky tactics to make prices and other information un-renderable by Inscriptis
@@ -257,3 +290,18 @@ def workarounds_for_obfuscations(content):
    content = re.sub('<!--\s+-->', '', content)

    return content

def get_triggered_text(content, trigger_text):
    triggered_text = []
    result = strip_ignore_text(content=content,
                               wordlist=trigger_text,
                               mode="line numbers")
    i = 1
    for p in content.splitlines():
        if i in result:
            triggered_text.append(p)
        i += 1

    return triggered_text


@@ -29,6 +29,7 @@ class import_url_list(Importer):
            data,
            flash,
            datastore,
            processor=None
            ):

        urls = data.split("\n")
@@ -51,8 +52,13 @@ class import_url_list(Importer):
            # Flask wtform validators wont work with basic auth, use validators package
            # Up to 5000 per batch so we dont flood the server
            # @todo validators.url failed on local hostnames (such as referring to ourself when using browserless)
            if len(url) and 'http' in url.lower() and good < 5000:
                extras = None
                if processor:
                    extras = {'processor': processor}

                new_uuid = datastore.add_watch(url=url.strip(), tag=tags, write_to_disk_now=False, extras=extras)
                if new_uuid:
                    # Straight into the queue.
                    self.new_uuids.append(new_uuid)


@@ -15,32 +15,34 @@ class model(dict):
            'headers': {
            },
            'requests': {
                'extra_proxies': [],  # Configurable extra proxies via the UI
                'jitter_seconds': 0,
                'proxy': None,  # Preferred proxy connection
                'time_between_check': {'weeks': None, 'days': None, 'hours': 3, 'minutes': None, 'seconds': None},
                'timeout': int(getenv("DEFAULT_SETTINGS_REQUESTS_TIMEOUT", "45")),  # Default 45 seconds
                'workers': int(getenv("DEFAULT_SETTINGS_REQUESTS_WORKERS", "10")),  # Number of threads, lower is better for slow connections
            },
            'application': {
                # Custom notification content
                'api_access_token_enabled': True,
                'base_url' : None,
                'empty_pages_are_a_change': False,
                'extract_title_as_title': False,
                'fetch_backend': getenv("DEFAULT_FETCH_BACKEND", "html_requests"),
                'filter_failure_notification_threshold_attempts': _FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT,
                'global_ignore_text': [],  # List of text to ignore when calculating the comparison checksum
                'global_subtractive_selectors': [],
                'ignore_whitespace': True,
                'notification_body': default_notification_body,
                'notification_format': default_notification_format,
                'notification_title': default_notification_title,
                'notification_urls': [],  # Apprise URL list
                'pager_size': 50,
                'password': False,
                'render_anchor_tag_content': False,
                'schema_version' : 0,
                'shared_diff_access': False,
                'webdriver_delay': None ,  # Extra delay in seconds before extracting text
            }
        }
    }
@@ -48,3 +50,15 @@ class model(dict):
    def __init__(self, *arg, **kw):
        super(model, self).__init__(*arg, **kw)
        self.update(self.base_config)

def parse_headers_from_text_file(filepath):
    headers = {}
    with open(filepath, 'r') as f:
        for l in f.readlines():
            l = l.strip()
            if not l.startswith('#') and ':' in l:
                (k, v) = l.split(':')
                headers[k.strip()] = v.strip()

    return headers
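
A small sketch of the one-"Header: value"-per-line format this parser expects (the header names and values are hypothetical):

import tempfile, os

example = "# comments are ignored\nUser-Agent: MyMonitor/1.0\nCookie: session=abc123\n"
with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as tmp:
    tmp.write(example)

print(parse_headers_from_text_file(tmp.name))
# expected: {'User-Agent': 'MyMonitor/1.0', 'Cookie': 'session=abc123'}
os.unlink(tmp.name)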


@@ -1,9 +1,14 @@
from distutils.util import strtobool
import logging
import os
import re
import time
import uuid

# Allowable protocols, protects against javascript: etc
# file:// is further checked by ALLOW_FILE_URI
SAFE_PROTOCOL_REGEX='^(http|https|ftp|file):'

minimum_seconds_recheck_time = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60))
mtable = {'seconds': 1, 'minutes': 60, 'hours': 3600, 'days': 86400, 'weeks': 86400 * 7}
@@ -11,57 +16,80 @@ from changedetectionio.notification import (
    default_notification_format_for_watch
)
base_config = {
'body': None,
'check_unique_lines': False, # On change-detected, compare against all history if its something new
'check_count': 0,
'date_created': None,
'consecutive_filter_failures': 0, # Every time the CSS/xPath filter cannot be located, reset when all is fine.
'extract_text': [], # Extract text by regex after filters
'extract_title_as_title': False,
'fetch_backend': 'system', # plaintext, playwright etc
'processor': 'text_json_diff', # could be restock_diff or others from .processors
'filter_failure_notification_send': strtobool(os.getenv('FILTER_FAILURE_NOTIFICATION_SEND_DEFAULT', 'True')),
'filter_text_added': True,
'filter_text_replaced': True,
'filter_text_removed': True,
'has_ldjson_price_data': None,
'track_ldjson_price_data': None,
'headers': {}, # Extra headers to send
'ignore_text': [], # List of text to ignore when calculating the comparison checksum
'in_stock_only' : True, # Only trigger change on going to instock from out-of-stock
'include_filters': [],
'last_checked': 0,
'last_error': False,
'last_viewed': 0, # history key value of the last viewed via the [diff] link
'method': 'GET',
# Custom notification content
'notification_body': None,
'notification_format': default_notification_format_for_watch,
'notification_muted': False,
'notification_title': None,
'notification_screenshot': False, # Include the latest screenshot if available and supported by the apprise URL
'notification_urls': [], # List of URLs to add to the notification Queue (Usually AppRise)
'paused': False,
'previous_md5': False,
'previous_md5_before_filters': False, # Used for skipping changedetection entirely
'proxy': None, # Preferred proxy connection
'subtractive_selectors': [],
'tag': None,
'text_should_not_be_present': [], # Text that should not present
# Re #110, so then if this is set to None, we know to use the default value instead
# Requires setting to None on submit if it's the same as the default
# Should be all None by default, so we use the system default in this case.
'time_between_check': {'weeks': None, 'days': None, 'hours': None, 'minutes': None, 'seconds': None},
'title': None,
'trigger_text': [], # List of text or regex to wait for until a change is detected
'url': '',
'uuid': str(uuid.uuid4()),
'webdriver_delay': None,
'webdriver_js_execute_code': None, # Run before change-detection
}
def is_safe_url(test_url):
# See https://github.com/dgtlmoon/changedetection.io/issues/1358
# Remove 'source:' prefix so we dont get 'source:javascript:' etc
# 'source:' is a valid way to tell us to return the source
r = re.compile(re.escape('source:'), re.IGNORECASE)
test_url = r.sub('', test_url)
pattern = re.compile(os.getenv('SAFE_PROTOCOL_REGEX', SAFE_PROTOCOL_REGEX), re.IGNORECASE)
if not pattern.match(test_url.strip()):
return False
return True
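
A quick sketch of how the guard behaves with a few illustrative URLs (default SAFE_PROTOCOL_REGEX, no environment override):

print(is_safe_url('https://example.com/page'))          # expected: True
print(is_safe_url('source:https://example.com/page'))   # expected: True - the 'source:' prefix is stripped first
print(is_safe_url('javascript:alert(1)'))                # expected: False - protocol not in SAFE_PROTOCOL_REGEX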
class model(dict):
    __newest_history_key = None
    __history_n = 0
    # (The previous per-class __base_config dict was removed here; watches are now initialised from the
    # module-level base_config above, which also switches 'fetch_backend' to 'system', adds a 'processor'
    # key and the new filter_text_* / LD+JSON price-tracking defaults.)
    jitter_seconds = 0

    def __init__(self, *arg, **kw):
        self.update(base_config)
        self.__datastore_path = kw['datastore_path']
        self['uuid'] = str(uuid.uuid4())
@@ -92,7 +120,11 @@ class model(dict):
    @property
    def link(self):

        url = self.get('url', '')
        if not is_safe_url(url):
            return 'DISABLED'

        ready_url = url
        if '{%' in url or '{{' in url:
            from jinja2 import Environment
@@ -111,6 +143,26 @@ class model(dict):
        return ready_url
@property
def get_fetch_backend(self):
"""
Like just using the `fetch_backend` key but there could be some logic
:return:
"""
# Maybe also if is_image etc?
# This is because chrome/playwright wont render the PDF in the browser and we will just fetch it and use pdf2html to see the text.
if self.is_pdf:
return 'html_requests'
return self.get('fetch_backend')
@property
def is_pdf(self):
# content_type field is set in the future
# https://github.com/dgtlmoon/changedetection.io/issues/1392
# Not sure the best logic here
return self.get('url', '').lower().endswith('.pdf') or 'pdf' in self.get('content_type', '').lower()
    @property
    def label(self):
        # Used for sorting
@@ -193,9 +245,32 @@ class model(dict):
        bump = self.history
        return self.__newest_history_key
def get_history_snapshot(self, timestamp):
import brotli
filepath = self.history[timestamp]
# See if a brotli versions exists and switch to that
if not filepath.endswith('.br') and os.path.isfile(f"{filepath}.br"):
filepath = f"{filepath}.br"
# OR in the backup case that the .br does not exist, but the plain one does
if filepath.endswith('.br') and not os.path.isfile(filepath):
if os.path.isfile(filepath.replace('.br', '')):
filepath = filepath.replace('.br', '')
if filepath.endswith('.br'):
# Brotli doesnt have a fileheader to detect it, so we rely on filename
# https://www.rfc-editor.org/rfc/rfc7932
with open(filepath, 'rb') as f:
return(brotli.decompress(f.read()).decode('utf-8'))
with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
return f.read()
    # Save some text file to the appropriate path and bump the history
    # result_obj from fetch_site_status.run()
    def save_history_text(self, contents, timestamp, snapshot_id):
        import brotli

        self.ensure_data_dir_exists()
@@ -204,13 +279,21 @@ class model(dict):
        if self.__newest_history_key and int(timestamp) == int(self.__newest_history_key):
            time.sleep(timestamp - self.__newest_history_key)

        threshold = int(os.getenv('SNAPSHOT_BROTLI_COMPRESSION_THRESHOLD', 1024))
        skip_brotli = strtobool(os.getenv('DISABLE_BROTLI_TEXT_SNAPSHOT', 'False'))

        if not skip_brotli and len(contents) > threshold:
            snapshot_fname = f"{snapshot_id}.txt.br"
            dest = os.path.join(self.watch_data_dir, snapshot_fname)
            if not os.path.exists(dest):
                with open(dest, 'wb') as f:
                    f.write(brotli.compress(contents, mode=brotli.MODE_TEXT))
        else:
            snapshot_fname = f"{snapshot_id}.txt"
            dest = os.path.join(self.watch_data_dir, snapshot_fname)
            if not os.path.exists(dest):
                with open(dest, 'wb') as f:
                    f.write(contents)

        # Append to index
        # @todo check last char was \n
@@ -247,7 +330,8 @@ class model(dict):
        # Compare each lines (set) against each history text file (set) looking for something new..
        existing_history = set({})
        for k, v in self.history.items():
            content = self.get_history_snapshot(k)
            alist = set([line.strip().lower() for line in content.splitlines()])
            existing_history = existing_history.union(alist)

        # Check that everything in local_lines(new stuff) already exists in existing_history - it should
@@ -262,17 +346,6 @@ class model(dict):
        # False is not an option for AppRise, must be type None
        return None
    # (The previous get_screenshot_as_jpeg() helper was removed in this change)

    def __get_file_ctime(self, filename):
        fname = os.path.join(self.watch_data_dir, filename)
        if os.path.isfile(fname):
@@ -319,6 +392,25 @@ class model(dict):
            return fname
        return False
def pause(self):
self['paused'] = True
def unpause(self):
self['paused'] = False
def toggle_pause(self):
self['paused'] ^= True
def mute(self):
self['notification_muted'] = True
def unmute(self):
self['notification_muted'] = False
def toggle_mute(self):
self['notification_muted'] ^= True
    def extract_regex_from_all_history(self, regex):
        import csv
        import re
@@ -330,8 +422,8 @@ class model(dict):
        # self.history will be keyed with the full path
        for k, fname in self.history.items():
            if os.path.isfile(fname):
                if True:
                    contents = self.get_history_snapshot(k)
                    res = re.findall(regex, contents, re.MULTILINE)
                    if res:
                        if not csv_writer:
@@ -362,3 +454,77 @@ class model(dict):
        f.close()
        return csv_output_filename
@property
# Return list of tags, stripped and lowercase, used for searching
def all_tags(self):
return [s.strip().lower() for s in self.get('tag','').split(',')]
def has_special_diff_filter_options_set(self):
# All False - nothing would be done, so act like it's not processable
if not self.get('filter_text_added', True) and not self.get('filter_text_replaced', True) and not self.get('filter_text_removed', True):
return False
# Or one is set
if not self.get('filter_text_added', True) or not self.get('filter_text_replaced', True) or not self.get('filter_text_removed', True):
return True
# None is set
return False
@property
def has_extra_headers_file(self):
if os.path.isfile(os.path.join(self.watch_data_dir, 'headers.txt')):
return True
for f in self.all_tags:
fname = "headers-"+re.sub(r'[\W_]', '', f).lower().strip() + ".txt"
filepath = os.path.join(self.__datastore_path, fname)
if os.path.isfile(filepath):
return True
return False
def get_all_headers(self):
from .App import parse_headers_from_text_file
headers = self.get('headers', {}).copy()
# Available headers on the disk could 'headers.txt' in the watch data dir
filepath = os.path.join(self.watch_data_dir, 'headers.txt')
try:
if os.path.isfile(filepath):
headers.update(parse_headers_from_text_file(filepath))
except Exception as e:
print(f"ERROR reading headers.txt at {filepath}", str(e))
# Or each by tag, as tagname.txt in the main datadir
for f in self.all_tags:
fname = "headers-"+re.sub(r'[\W_]', '', f).lower().strip() + ".txt"
filepath = os.path.join(self.__datastore_path, fname)
try:
if os.path.isfile(filepath):
headers.update(parse_headers_from_text_file(filepath))
except Exception as e:
print(f"ERROR reading headers.txt at {filepath}", str(e))
return headers
def get_last_fetched_before_filters(self):
import brotli
filepath = os.path.join(self.watch_data_dir, 'last-fetched.br')
if not os.path.isfile(filepath):
# If a previous attempt doesnt yet exist, just snarf the previous snapshot instead
dates = list(self.history.keys())
if len(dates):
return self.get_history_snapshot(dates[-1])
else:
return ''
with open(filepath, 'rb') as f:
return(brotli.decompress(f.read()).decode('utf-8'))
def save_last_fetched_before_filters(self, contents):
import brotli
filepath = os.path.join(self.watch_data_dir, 'last-fetched.br')
with open(filepath, 'wb') as f:
f.write(brotli.compress(contents, mode=brotli.MODE_TEXT))


@@ -5,15 +5,18 @@ import json
valid_tokens = {
    'base_url': '',
    'current_snapshot': '',
    'diff': '',
    'diff_added': '',
    'diff_full': '',
    'diff_removed': '',
    'diff_url': '',
    'preview_url': '',
    'triggered_text': '',
    'watch_tag': '',
    'watch_title': '',
    'watch_url': '',
    'watch_uuid': '',
}

default_notification_format_for_watch = 'System default'
@@ -86,7 +89,7 @@ def process_notification(n_object, datastore):
    n_body = jinja2_env.from_string(n_object.get('notification_body', default_notification_body)).render(**notification_parameters)
    n_title = jinja2_env.from_string(n_object.get('notification_title', default_notification_title)).render(**notification_parameters)
    n_format = valid_notification_formats.get(
        n_object.get('notification_format', default_notification_format),
        valid_notification_formats[default_notification_format],
    )
@@ -120,10 +123,10 @@ def process_notification(n_object, datastore):
            url += k + 'avatar_url=https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/changedetectionio/static/images/avatar-256x256.png'

        if url.startswith('tgram://'):
            # Telegram only supports a limit subset of HTML, remove the '<br>' we place in.
            # re https://github.com/dgtlmoon/changedetection.io/issues/555
            # @todo re-use an existing library we have already imported to strip all non-allowed tags
            n_body = n_body.replace('<br>', '\n')
            n_body = n_body.replace('</br>', '\n')
            # real limit is 4096, but minus some for extra metadata
            payload_max_size = 3600
@@ -209,15 +212,18 @@ def create_notification_parameters(n_object, datastore):
    tokens.update(
        {
            'base_url': base_url if base_url is not None else '',
            'current_snapshot': n_object['current_snapshot'] if 'current_snapshot' in n_object else '',
            'diff': n_object.get('diff', ''),  # Null default in the case we use a test
            'diff_added': n_object.get('diff_added', ''),  # Null default in the case we use a test
            'diff_full': n_object.get('diff_full', ''),  # Null default in the case we use a test
            'diff_removed': n_object.get('diff_removed', ''),  # Null default in the case we use a test
            'diff_url': diff_url,
            'preview_url': preview_url,
            'triggered_text': n_object.get('triggered_text', ''),
            'watch_tag': watch_tag if watch_tag is not None else '',
            'watch_title': watch_title if watch_title is not None else '',
            'watch_url': watch_url,
            'watch_uuid': uuid,
        })

    return tokens
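
For context on how these tokens surface in a message: the notification title and body are Jinja2 templates rendered against this dict, so a body template such as the one in this sketch (template text and values invented for the example) would pick up the new diff_added and triggered_text tokens:

from jinja2 import Environment, BaseLoader

body_template = "Change detected on {{ watch_url }}\nNew lines:\n{{ diff_added }}\nTriggered by: {{ triggered_text }}"
tokens = {'watch_url': 'https://example.com/page', 'diff_added': '(added) price now 10 EUR', 'triggered_text': 'price'}

print(Environment(loader=BaseLoader()).from_string(body_template).render(**tokens))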


@@ -0,0 +1,11 @@
# Change detection post-processors
The concept here is to be able to switch between different domain specific problems to solve.
- `text_json_diff` The traditional text and JSON comparison handler
- `restock_diff` Only cares about detecting if a product looks like it has some text that suggests that it's out of stock, otherwise assumes that it's in stock.
Some suggestions for the future:
- `graphical`
- `restock_and_price` - extract price AND stock text


@@ -0,0 +1,24 @@
from abc import abstractmethod
import hashlib

class difference_detection_processor():

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)

    @abstractmethod
    def run(self, uuid, skip_when_checksum_same=True):
        update_obj = {'last_notification_error': False, 'last_error': False}
        some_data = 'xxxxx'
        update_obj["previous_md5"] = hashlib.md5(some_data.encode('utf-8')).hexdigest()
        changed_detected = False
        return changed_detected, update_obj, ''.encode('utf-8')

def available_processors():
    from . import restock_diff, text_json_diff
    x = [('text_json_diff', text_json_diff.name), ('restock_diff', restock_diff.name)]
    # @todo Make this smarter with introspection of sorts.
    return x
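
To make the plug-in idea from the processors README concrete, here is a hedged sketch of what a third processor module could look like; only the module-level name/description and the run() return shape of (changed_detected, update_obj, snapshot_bytes) mirror the real processors, everything else is invented for illustration:

import hashlib
from . import difference_detection_processor

name = 'Example: page length changes (illustrative only)'
description = 'Hypothetical processor, not part of changedetection.io'

class perform_site_check(difference_detection_processor):
    def __init__(self, *args, datastore, **kwargs):
        super().__init__(*args, **kwargs)
        self.datastore = datastore

    def run(self, uuid, skip_when_checksum_same=True):
        update_obj = {'last_notification_error': False, 'last_error': False}
        watch = self.datastore.data['watching'].get(uuid)
        # A real processor would run a content_fetcher here; we just hash a placeholder value
        snapshot = f"url length: {len(watch.get('url', ''))}"
        fetched_md5 = hashlib.md5(snapshot.encode('utf-8')).hexdigest()
        changed_detected = bool(watch.get('previous_md5')) and watch.get('previous_md5') != fetched_md5
        update_obj['previous_md5'] = fetched_md5
        return changed_detected, update_obj, snapshot.encode('utf-8')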


@@ -0,0 +1,132 @@
import hashlib
import os
import re
import urllib3
from . import difference_detection_processor
from changedetectionio import content_fetcher
from copy import deepcopy
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
name = 'Re-stock detection for single product pages'
description = 'Detects if the product goes back to in-stock'
class UnableToExtractRestockData(Exception):
def __init__(self, status_code):
# Set this so we can use it in other parts of the app
self.status_code = status_code
return
class perform_site_check(difference_detection_processor):
screenshot = None
xpath_data = None
def __init__(self, *args, datastore, **kwargs):
super().__init__(*args, **kwargs)
self.datastore = datastore
def run(self, uuid, skip_when_checksum_same=True):
# DeepCopy so we can be sure we don't accidently change anything by reference
watch = deepcopy(self.datastore.data['watching'].get(uuid))
if not watch:
raise Exception("Watch no longer exists.")
# Protect against file:// access
if re.search(r'^file', watch.get('url', ''), re.IGNORECASE) and not os.getenv('ALLOW_FILE_URI', False):
raise Exception(
"file:// type access is denied for security reasons."
)
# Unset any existing notification error
update_obj = {'last_notification_error': False, 'last_error': False}
extra_headers = watch.get('headers', [])
# Tweak the base config with the per-watch ones
request_headers = deepcopy(self.datastore.data['settings']['headers'])
request_headers.update(extra_headers)
# https://github.com/psf/requests/issues/4525
# Requests doesnt yet support brotli encoding, so don't put 'br' here, be totally sure that the user cannot
# do this by accident.
if 'Accept-Encoding' in request_headers and "br" in request_headers['Accept-Encoding']:
request_headers['Accept-Encoding'] = request_headers['Accept-Encoding'].replace(', br', '')
timeout = self.datastore.data['settings']['requests'].get('timeout')
url = watch.link
request_body = self.datastore.data['watching'][uuid].get('body')
request_method = self.datastore.data['watching'][uuid].get('method')
ignore_status_codes = self.datastore.data['watching'][uuid].get('ignore_status_codes', False)
# Pluggable content fetcher
prefer_backend = watch.get_fetch_backend
if not prefer_backend or prefer_backend == 'system':
prefer_backend = self.datastore.data['settings']['application']['fetch_backend']
if hasattr(content_fetcher, prefer_backend):
klass = getattr(content_fetcher, prefer_backend)
else:
# If the klass doesnt exist, just use a default
klass = getattr(content_fetcher, "html_requests")
proxy_id = self.datastore.get_preferred_proxy_for_watch(uuid=uuid)
proxy_url = None
if proxy_id:
proxy_url = self.datastore.proxy_list.get(proxy_id).get('url')
print("UUID {} Using proxy {}".format(uuid, proxy_url))
fetcher = klass(proxy_override=proxy_url)
# Configurable per-watch or global extra delay before extracting text (for webDriver types)
system_webdriver_delay = self.datastore.data['settings']['application'].get('webdriver_delay', None)
if watch['webdriver_delay'] is not None:
fetcher.render_extract_delay = watch.get('webdriver_delay')
elif system_webdriver_delay is not None:
fetcher.render_extract_delay = system_webdriver_delay
# Could be removed if requests/plaintext could also return some info?
if prefer_backend != 'html_webdriver':
raise Exception("Re-stock detection requires Chrome or compatible webdriver/playwright fetcher to work")
if watch.get('webdriver_js_execute_code') is not None and watch.get('webdriver_js_execute_code').strip():
fetcher.webdriver_js_execute_code = watch.get('webdriver_js_execute_code')
fetcher.run(url, timeout, request_headers, request_body, request_method, ignore_status_codes, watch.get('include_filters'))
fetcher.quit()
self.screenshot = fetcher.screenshot
self.xpath_data = fetcher.xpath_data
# Track the content type
update_obj['content_type'] = fetcher.headers.get('Content-Type', '')
update_obj["last_check_status"] = fetcher.get_last_status_code()
# Main detection method
fetched_md5 = None
if fetcher.instock_data:
fetched_md5 = hashlib.md5(fetcher.instock_data.encode('utf-8')).hexdigest()
# 'Possibly in stock' comes from stock-not-in-stock.js when no string found above the fold.
update_obj["in_stock"] = True if fetcher.instock_data == 'Possibly in stock' else False
else:
raise UnableToExtractRestockData(status_code=fetcher.status_code)
# The main thing that all this at the moment comes down to :)
changed_detected = False
if watch.get('previous_md5') and watch.get('previous_md5') != fetched_md5:
# Yes if we only care about it going to instock, AND we are in stock
if watch.get('in_stock_only') and update_obj["in_stock"]:
changed_detected = True
if not watch.get('in_stock_only'):
# All cases
changed_detected = True
# Always record the new checksum
update_obj["previous_md5"] = fetched_md5
return changed_detected, update_obj, fetcher.instock_data.encode('utf-8')


@@ -1,23 +1,35 @@
# HTML to TEXT/JSON DIFFERENCE FETCHER

import hashlib
import json
import logging
import os
import re
import time
import urllib3

from changedetectionio import content_fetcher, html_tools
from changedetectionio.blueprint.price_data_follower import PRICE_DATA_TRACK_ACCEPT, PRICE_DATA_TRACK_REJECT
from copy import deepcopy
from . import difference_detection_processor

urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

name = 'Webpage Text/HTML, JSON and PDF changes'
description = 'Detects all text changes where possible'

class FilterNotFoundInResponse(ValueError):
    def __init__(self, msg):
        ValueError.__init__(self, msg)

class PDFToHTMLToolNotFound(ValueError):
    def __init__(self, msg):
        ValueError.__init__(self, msg)

# Some common stuff here that can be moved to a base class
# (set_proxy_from_list)
class perform_site_check(difference_detection_processor):

    screenshot = None
    xpath_data = None
@@ -38,8 +50,7 @@ class perform_site_check():
        return regex

    def run(self, uuid, skip_when_checksum_same=True):
        changed_detected = False
        screenshot = False  # as bytes
        stripped_text_from_html = ""
@@ -48,7 +59,7 @@ class perform_site_check():
        watch = deepcopy(self.datastore.data['watching'].get(uuid))
        if not watch:
            raise Exception("Watch no longer exists.")

        # Protect against file:// access
        if re.search(r'^file', watch.get('url', ''), re.IGNORECASE) and not os.getenv('ALLOW_FILE_URI', False):
@@ -59,10 +70,9 @@ class perform_site_check():
        # Unset any existing notification error
        update_obj = {'last_notification_error': False, 'last_error': False}

        # Tweak the base config with the per-watch ones
        extra_headers = watch.get_all_headers()
        request_headers = self.datastore.get_all_headers()
        request_headers.update(extra_headers)

        # https://github.com/psf/requests/issues/4525
@@ -86,7 +96,10 @@ class perform_site_check():
            is_source = True

        # Pluggable content fetcher
        prefer_backend = watch.get_fetch_backend
        if not prefer_backend or prefer_backend == 'system':
            prefer_backend = self.datastore.data['settings']['application']['fetch_backend']

        if hasattr(content_fetcher, prefer_backend):
            klass = getattr(content_fetcher, prefer_backend)
        else:
@@ -116,12 +129,26 @@ class perform_site_check():
        if watch.get('webdriver_js_execute_code') is not None and watch.get('webdriver_js_execute_code').strip():
            fetcher.webdriver_js_execute_code = watch.get('webdriver_js_execute_code')

        # requests for PDF's, images etc should be passwd the is_binary flag
        is_binary = watch.is_pdf

        fetcher.run(url, timeout, request_headers, request_body, request_method, ignore_status_codes, watch.get('include_filters'), is_binary=is_binary)
        fetcher.quit()

        self.screenshot = fetcher.screenshot
        self.xpath_data = fetcher.xpath_data

        # Track the content type
        update_obj['content_type'] = fetcher.get_all_headers().get('content-type', '').lower()

        # Watches added automatically in the queue manager will skip if its the same checksum as the previous run
        # Saves a lot of CPU
        update_obj['previous_md5_before_filters'] = hashlib.md5(fetcher.content.encode('utf-8')).hexdigest()
        if skip_when_checksum_same:
            if update_obj['previous_md5_before_filters'] == watch.get('previous_md5_before_filters'):
                raise content_fetcher.checksumFromPreviousCheckWasTheSame()

        # Fetching complete, now filters
        # @todo move to class / maybe inside of fetcher abstract base?
@@ -132,7 +159,7 @@ class perform_site_check():
        # https://stackoverflow.com/questions/41817578/basic-method-chaining ?
        # return content().textfilter().jsonextract().checksumcompare() ?
        is_json = 'application/json' in fetcher.get_all_headers().get('content-type', '').lower()
        is_html = not is_json

        # source: support, basically treat it as plaintext
@@ -140,7 +167,32 @@ class perform_site_check():
            is_html = False
            is_json = False

        if watch.is_pdf or 'application/pdf' in fetcher.get_all_headers().get('content-type', '').lower():
            from shutil import which
            tool = os.getenv("PDF_TO_HTML_TOOL", "pdftohtml")
            if not which(tool):
                raise PDFToHTMLToolNotFound("Command-line `{}` tool was not found in system PATH, was it installed?".format(tool))

            import subprocess
            proc = subprocess.Popen(
                [tool, '-stdout', '-', '-s', 'out.pdf', '-i'],
                stdout=subprocess.PIPE,
                stdin=subprocess.PIPE)
            proc.stdin.write(fetcher.raw_content)
            proc.stdin.close()
            fetcher.content = proc.stdout.read().decode('utf-8')
            proc.wait(timeout=60)

            # Add a little metadata so we know if the file changes (like if an image changes, but the text is the same
            # @todo may cause problems with non-UTF8?
            metadata = "<p>Added by changedetection.io: Document checksum - {} Filesize - {} bytes</p>".format(
                hashlib.md5(fetcher.raw_content).hexdigest().upper(),
                len(fetcher.content))

            fetcher.content = fetcher.content.replace('</body>', metadata + '</body>')

        include_filters_rule = deepcopy(watch.get('include_filters', []))
        # include_filters_rule = watch['include_filters']
        subtractive_selectors = watch.get(
            "subtractive_selectors", []
@@ -148,6 +200,10 @@ class perform_site_check():
            "global_subtractive_selectors", []
        )
        # Inject a virtual LD+JSON price tracker rule
        if watch.get('track_ldjson_price_data', '') == PRICE_DATA_TRACK_ACCEPT:
            include_filters_rule.append(html_tools.LD_JSON_PRODUCT_OFFER_SELECTOR)

        has_filter_rule = include_filters_rule and len("".join(include_filters_rule).strip())
        has_subtractive_selectors = subtractive_selectors and len(subtractive_selectors[0].strip())
@@ -155,6 +211,14 @@ class perform_site_check():
            include_filters_rule.append("json:$")
            has_filter_rule = True

        if is_json:
            # Sort the JSON so we dont get false alerts when the content is just re-ordered
            try:
                fetcher.content = json.dumps(json.loads(fetcher.content), sort_keys=True)
            except Exception as e:
                # Might have just been a snippet, or otherwise bad JSON, continue
                pass

        if has_filter_rule:
            json_filter_prefixes = ['json:', 'jq:']
            for filter in include_filters_rule:
@@ -162,6 +226,8 @@ class perform_site_check():
                    stripped_text_from_html += html_tools.extract_json_as_string(content=fetcher.content, json_filter=filter)
                    is_html = False

        if is_html or is_source:

            # CSS Filter, extract the HTML that matches and feed that into the existing inscriptis::get_text
@@ -169,13 +235,17 @@ class perform_site_check():
            html_content = fetcher.content

            # If not JSON, and if it's not text/plain..
            if 'text/plain' in fetcher.get_all_headers().get('content-type', '').lower():
                # Don't run get_text or xpath/css filters on plaintext
                stripped_text_from_html = html_content
            else:
                # Does it have some ld+json price data? used for easier monitoring
                update_obj['has_ldjson_price_data'] = html_tools.has_ldjson_product_info(fetcher.content)

                # Then we assume HTML
                if has_filter_rule:
                    html_content = ""
                    for filter_rule in include_filters_rule:
                        # For HTML/XML we offer xpath as an option, just start a regular xPath "/.."
                        if filter_rule[0] == '/' or filter_rule.startswith('xpath:'):
@@ -208,6 +278,34 @@ class perform_site_check():
# Re #340 - return the content before the 'ignore text' was applied # Re #340 - return the content before the 'ignore text' was applied
text_content_before_ignored_filter = stripped_text_from_html.encode('utf-8') text_content_before_ignored_filter = stripped_text_from_html.encode('utf-8')
# @todo whitespace coming from missing rtrim()?
# stripped_text_from_html could be based on their preferences, replace the processed text with only that which they want to know about.
# Rewrite's the processing text based on only what diff result they want to see
if watch.has_special_diff_filter_options_set() and len(watch.history.keys()):
# Now the content comes from the diff-parser and not the returned HTTP traffic, so could be some differences
from .. import diff
# needs to not include (added) etc or it may get used twice
# Replace the processed text with the preferred result
rendered_diff = diff.render_diff(previous_version_file_contents=watch.get_last_fetched_before_filters(),
newest_version_file_contents=stripped_text_from_html,
include_equal=False, # not the same lines
include_added=watch.get('filter_text_added', True),
include_removed=watch.get('filter_text_removed', True),
include_replaced=watch.get('filter_text_replaced', True),
line_feed_sep="\n",
include_change_type_prefix=False)
watch.save_last_fetched_before_filters(text_content_before_ignored_filter)
if not rendered_diff and stripped_text_from_html:
# We had some content, but no differences were found
# Store our new file as the MD5 so it will trigger in the future
c = hashlib.md5(text_content_before_ignored_filter.translate(None, b'\r\n\t ')).hexdigest()
return False, {'previous_md5': c}, stripped_text_from_html.encode('utf-8')
else:
stripped_text_from_html = rendered_diff
# Treat pages with no renderable text content as a change? No by default
empty_pages_are_a_change = self.datastore.data['settings']['application'].get('empty_pages_are_a_change', False)
if not is_json and not empty_pages_are_a_change and len(stripped_text_from_html.strip()) == 0:
@@ -266,6 +364,7 @@ class perform_site_check():
blocked = True
# Filter and trigger works the same, so reuse it
# It should return the line numbers that match
+# Unblock flow if the trigger was found (some text remained after stripped what didnt match)
result = html_tools.strip_ignore_text(content=str(stripped_text_from_html),
wordlist=trigger_text,
mode="line numbers")

View File

@@ -0,0 +1,10 @@
from dataclasses import dataclass, field
from typing import Any
# So that we can queue some metadata in `item`
# https://docs.python.org/3/library/queue.html#queue.PriorityQueue
#
@dataclass(order=True)
class PrioritizedItem:
priority: int
item: Any=field(compare=False)
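Because item is declared with compare=False, only priority takes part in the generated ordering methods, so arbitrary (even non-comparable) metadata can ride along in the queue. A minimal, illustrative usage with the standard library's PriorityQueue (not code from this change):

import queue
from dataclasses import dataclass, field
from typing import Any

@dataclass(order=True)
class PrioritizedItem:
    priority: int
    item: Any = field(compare=False)

q = queue.PriorityQueue()
q.put(PrioritizedItem(priority=5, item={'uuid': 'watch-b', 'op': 'recheck'}))
q.put(PrioritizedItem(priority=1, item={'uuid': 'watch-a', 'op': 'recheck'}))

# Lowest priority value comes out first; the dicts never need to be comparable
# because `item` is excluded from the ordering.
print(q.get().item)  # -> {'uuid': 'watch-a', 'op': 'recheck'}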

View File

@@ -0,0 +1,183 @@
module.exports = async ({page, context}) => {
var {
url,
execute_js,
user_agent,
extra_wait_ms,
req_headers,
include_filters,
xpath_element_js,
screenshot_quality,
proxy_username,
proxy_password,
disk_cache_dir,
no_cache_list,
block_url_list,
} = context;
await page.setBypassCSP(true)
await page.setExtraHTTPHeaders(req_headers);
await page.setUserAgent(user_agent);
// https://ourcodeworld.com/articles/read/1106/how-to-solve-puppeteer-timeouterror-navigation-timeout-of-30000-ms-exceeded
await page.setDefaultNavigationTimeout(0);
if (proxy_username) {
await page.authenticate({
username: proxy_username,
password: proxy_password
});
}
await page.setViewport({
width: 1024,
height: 768,
deviceScaleFactor: 1,
});
await page.setRequestInterception(true);
if (disk_cache_dir) {
console.log(">>>>>>>>>>>>>>> LOCAL DISK CACHE ENABLED <<<<<<<<<<<<<<<<<<<<<");
}
const fs = require('fs');
const crypto = require('crypto');
function file_is_expired(file_path) {
if (!fs.existsSync(file_path)) {
return true;
}
var stats = fs.statSync(file_path);
const now_date = new Date();
const expire_seconds = 300;
if ((now_date / 1000) - (stats.mtime.getTime() / 1000) > expire_seconds) {
console.log("CACHE EXPIRED: " + file_path);
return true;
}
return false;
}
page.on('request', async (request) => {
// General blocking of requests that waste traffic
if (block_url_list.some(substring => request.url().toLowerCase().includes(substring))) return request.abort();
if (disk_cache_dir) {
const url = request.url();
const key = crypto.createHash('md5').update(url).digest("hex");
const dir_path = disk_cache_dir + key.slice(0, 1) + '/' + key.slice(1, 2) + '/' + key.slice(2, 3) + '/';
// https://stackoverflow.com/questions/4482686/check-synchronously-if-file-directory-exists-in-node-js
if (fs.existsSync(dir_path + key)) {
console.log("* CACHE HIT , using - " + dir_path + key + " - " + url);
const cached_data = fs.readFileSync(dir_path + key);
// @todo headers can come from dir_path+key+".meta" json file
request.respond({
status: 200,
//contentType: 'text/html', //@todo
body: cached_data
});
return;
}
}
request.continue();
});
if (disk_cache_dir) {
page.on('response', async (response) => {
const url = response.url();
// Basic filtering for sane responses
if (response.request().method() != 'GET' || response.request().resourceType() == 'xhr' || response.request().resourceType() == 'document' || response.status() != 200) {
console.log("Skipping (not useful) - Status:" + response.status() + " Method:" + response.request().method() + " ResourceType:" + response.request().resourceType() + " " + url);
return;
}
if (no_cache_list.some(substring => url.toLowerCase().includes(substring))) {
console.log("Skipping (no_cache_list) - " + url);
return;
}
if (url.toLowerCase().includes('data:')) {
console.log("Skipping (embedded-data) - " + url);
return;
}
response.buffer().then(buffer => {
if (buffer.length > 100) {
console.log("Cache - Saving " + response.request().method() + " - " + url + " - " + response.request().resourceType());
const key = crypto.createHash('md5').update(url).digest("hex");
const dir_path = disk_cache_dir + key.slice(0, 1) + '/' + key.slice(1, 2) + '/' + key.slice(2, 3) + '/';
if (!fs.existsSync(dir_path)) {
fs.mkdirSync(dir_path, {recursive: true})
}
if (fs.existsSync(dir_path + key)) {
if (file_is_expired(dir_path + key)) {
fs.writeFileSync(dir_path + key, buffer);
}
} else {
fs.writeFileSync(dir_path + key, buffer);
}
}
});
});
}
const r = await page.goto(url, {
waitUntil: 'load'
});
await page.waitForTimeout(1000);
await page.waitForTimeout(extra_wait_ms);
if (execute_js) {
await page.evaluate(execute_js);
await page.waitForTimeout(200);
}
var xpath_data;
var instock_data;
try {
// Not sure the best way here, in the future this should be a new package added to npm then run in browserless
// (Once the old playwright is removed)
xpath_data = await page.evaluate((include_filters) => {%xpath_scrape_code%}, include_filters);
instock_data = await page.evaluate(() => {%instock_scrape_code%});
} catch (e) {
console.log(e);
}
// Protocol error (Page.captureScreenshot): Cannot take screenshot with 0 width can come from a proxy auth failure
// Wrap it here (for now)
var b64s = false;
try {
b64s = await page.screenshot({encoding: "base64", fullPage: true, quality: screenshot_quality, type: 'jpeg'});
} catch (e) {
console.log(e);
}
// May fail on very large pages with 'WARNING: tile memory limits exceeded, some content may not draw'
if (!b64s) {
// @todo after text extract, we can place some overlay text with red background to say 'cropped'
console.error('ERROR: content-fetcher page was maybe too large for a screenshot, reverting to viewport only screenshot');
try {
b64s = await page.screenshot({encoding: "base64", quality: screenshot_quality, type: 'jpeg'});
} catch (e) {
console.log(e);
}
}
var html = await page.content();
return {
data: {
'content': html,
'headers': r.headers(),
'instock_data': instock_data,
'screenshot': b64s,
'status_code': r.status(),
'xpath_data': xpath_data
},
type: 'application/json',
};
};
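For reference, the cache layout used by the script above (md5 hex digest of the URL, sharded into three one-character directories, with a 300 second expiry) could be mirrored roughly like this in Python. This is an illustrative sketch only; the function names are not from the repository:

import hashlib
import os
import time

CACHE_EXPIRE_SECONDS = 300  # same expiry the fetcher script uses

def cache_path_for_url(disk_cache_dir, url):
    # md5 hex digest of the URL, sharded as <a>/<b>/<c>/<full-digest>
    key = hashlib.md5(url.encode('utf-8')).hexdigest()
    return os.path.join(disk_cache_dir, key[0], key[1], key[2], key)

def is_expired(file_path, expire_seconds=CACHE_EXPIRE_SECONDS):
    # Missing files count as expired, mirroring file_is_expired() above
    if not os.path.exists(file_path):
        return True
    return (time.time() - os.path.getmtime(file_path)) > expire_seconds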

View File

@@ -0,0 +1,102 @@
function isItemInStock() {
// @todo Pass these in so the same list can be used in non-JS fetchers
const outOfStockTexts = [
'0 in stock',
'agotado',
'artikel zurzeit vergriffen',
'as soon as stock is available',
'available for back order',
'backordered',
'brak na stanie',
'brak w magazynie',
'coming soon',
'currently have any tickets for this',
'currently unavailable',
'en rupture de stock',
'item is no longer available',
'message if back in stock',
'nachricht bei',
'nicht auf lager',
'nicht lieferbar',
'nicht zur verfügung',
'no disponible temporalmente',
'no longer in stock',
'no tickets available',
'not available',
'not currently available',
'not in stock',
'notify me when available',
'não estamos a aceitar encomendas',
'out of stock',
'out-of-stock',
'produkt niedostępny',
'sold out',
'temporarily out of stock',
'temporarily unavailable',
'tickets unavailable',
'unavailable tickets',
'we do not currently have an estimate of when this product will be back in stock.',
'zur zeit nicht an lager',
];
const negateOutOfStockRegexs = [
'[0-9] in stock'
]
var negateOutOfStockRegexs_r = [];
for (let i = 0; i < negateOutOfStockRegexs.length; i++) {
negateOutOfStockRegexs_r.push(new RegExp(negateOutOfStockRegexs[i], 'g'));
}
const elementsWithZeroChildren = Array.from(document.getElementsByTagName('*')).filter(element => element.children.length === 0);
// REGEXS THAT REALLY MEAN IT'S IN STOCK
for (let i = elementsWithZeroChildren.length - 1; i >= 0; i--) {
const element = elementsWithZeroChildren[i];
if (element.offsetWidth > 0 || element.offsetHeight > 0 || element.getClientRects().length > 0) {
var elementText="";
if (element.tagName.toLowerCase() === "input") {
elementText = element.value.toLowerCase();
} else {
elementText = element.textContent.toLowerCase();
}
if (elementText.length) {
// try which ones could mean its in stock
for (let i = 0; i < negateOutOfStockRegexs.length; i++) {
if (negateOutOfStockRegexs_r[i].test(elementText)) {
return 'Possibly in stock';
}
}
}
}
}
// OTHER STUFF THAT COULD BE THAT IT'S OUT OF STOCK
for (let i = elementsWithZeroChildren.length - 1; i >= 0; i--) {
const element = elementsWithZeroChildren[i];
if (element.offsetWidth > 0 || element.offsetHeight > 0 || element.getClientRects().length > 0) {
var elementText="";
if (element.tagName.toLowerCase() === "input") {
elementText = element.value.toLowerCase();
} else {
elementText = element.textContent.toLowerCase();
}
if (elementText.length) {
// and these mean its out of stock
for (const outOfStockText of outOfStockTexts) {
if (elementText.includes(outOfStockText)) {
return elementText; // item is out of stock
}
}
}
}
}
return 'Possibly in stock'; // possibly in stock, cant decide otherwise.
}
// returns the element text that makes it think it's out of stock
return isItemInStock();
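As the @todo at the top of isItemInStock() notes, the same phrase list could be reused by the non-JS fetchers. A rough sketch of what that server-side check might look like in Python; the helper name and the shortened list here are illustrative, not existing code:

OUT_OF_STOCK_TEXTS = [
    '0 in stock',
    'agotado',
    'currently unavailable',
    'out of stock',
    'sold out',
    # ... the rest of the phrase list used by isItemInStock() above
]

def looks_out_of_stock(page_text: str) -> bool:
    # Case-insensitive substring match over the already-extracted page text
    text = page_text.lower()
    return any(phrase in text for phrase in OUT_OF_STOCK_TEXTS)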

View File

@@ -1,3 +1,6 @@
// Copyright (C) 2021 Leigh Morresi (dgtlmoon@gmail.com)
// All rights reserved.
// @file Scrape the page looking for elements of concern (%ELEMENTS%)
// http://matatk.agrip.org.uk/tests/position-and-width/
// https://stackoverflow.com/questions/26813480/when-is-element-getboundingclientrect-guaranteed-to-be-updated-accurate
@@ -5,8 +8,14 @@
// Some pages like https://www.londonstockexchange.com/stock/NCCL/ncondezi-energy-limited/analysis
// will automatically force a scroll somewhere, so include the position offset
// Lets hope the position doesnt change while we iterate the bbox's, but this is better than nothing
var scroll_y = 0;
try {
scroll_y = +document.documentElement.scrollTop || document.body.scrollTop
} catch (e) {
console.log(e);
}
-var scroll_y=+document.documentElement.scrollTop || document.body.scrollTop
// Include the getXpath script directly, easier than fetching
function getxpath(e) {
@@ -35,15 +44,15 @@ const findUpTag = (el) => {
if (el.name !== undefined && el.name.length) {
var proposed = el.tagName + "[name=" + el.name + "]";
var proposed_element = window.document.querySelectorAll(proposed);
-if(proposed_element.length) {
+if (proposed_element.length) {
if (proposed_element.length === 1) {
return proposed;
} else {
// Some sites change ID but name= stays the same, we can hit it if we know the index
// Find all the elements that match and work out the input[n]
-var n=Array.from(proposed_element).indexOf(el);
+var n = Array.from(proposed_element).indexOf(el);
// Return a Playwright selector for nthinput[name=zipcode]
-return proposed+" >> nth="+n;
+return proposed + " >> nth=" + n;
}
}
}
@@ -81,8 +90,16 @@ var bbox;
for (var i = 0; i < elements.length; i++) {
bbox = elements[i].getBoundingClientRect();
-// Forget really small ones
-if (bbox['width'] < 10 && bbox['height'] < 10) {
+// Exclude items that are not interactable or visible
+if(elements[i].style.opacity === "0") {
+continue
+}
+if(elements[i].style.display === "none" || elements[i].style.pointerEvents === "none" ) {
+continue
+}
+// Skip really small ones, and where width or height ==0
+if (bbox['width'] * bbox['height'] < 100) {
continue;
}
@@ -138,7 +155,6 @@ for (var i = 0; i < elements.length; i++) {
}
// Inject the current one set in the include_filters, which may be a CSS rule
// used for displaying the current one in VisualSelector, where its not one we generated.
if (include_filters.length) {
@@ -166,10 +182,23 @@ if (include_filters.length) {
}
if (q) {
-bbox = q.getBoundingClientRect();
-console.log("xpath_element_scraper: Got filter element, scroll from top was "+scroll_y)
-} else {
-console.log("xpath_element_scraper: filter element "+f+" was not found");
+// #1231 - IN the case XPath attribute filter is applied, we will have to traverse up and find the element.
+if (q.hasOwnProperty('getBoundingClientRect')) {
+bbox = q.getBoundingClientRect();
+console.log("xpath_element_scraper: Got filter element, scroll from top was " + scroll_y)
+} else {
+try {
+// Try and see we can find its ownerElement
+bbox = q.ownerElement.getBoundingClientRect();
+console.log("xpath_element_scraper: Got filter by ownerElement element, scroll from top was " + scroll_y)
+} catch (e) {
+console.log("xpath_element_scraper: error looking up ownerElement")
+}
+}
+}
+if(!q) {
+console.log("xpath_element_scraper: filter element " + f + " was not found");
}
if (bbox && bbox['width'] > 0 && bbox['height'] > 0) {
@@ -184,5 +213,9 @@ if (include_filters.length) {
}
}
// Sort the elements so we find the smallest one first, in other words, we find the smallest one matching in that area
// so that we dont select the wrapping element by mistake and be unable to select what we want
size_pos.sort((a, b) => (a.width*a.height > b.width*b.height) ? 1 : -1)
// Window.width required for proper scaling in the frontend
return {'size_pos': size_pos, 'browser_width': window.innerWidth};

View File

@@ -1,104 +0,0 @@
#!/bin/bash
# live_server will throw errors even with live_server_scope=function if I have the live_server setup in different functions
# and I like to restart the server for each test (and have the test cleanup after each test)
# merge request welcome :)
# exit when any command fails
set -e
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
find tests/test_*py -type f|while read test_name
do
echo "TEST RUNNING $test_name"
pytest $test_name
done
echo "RUNNING WITH BASE_URL SET"
# Now re-run some tests with BASE_URL enabled
# Re #65 - Ability to include a link back to the installation, in the notification.
export BASE_URL="https://really-unique-domain.io"
pytest tests/test_notification.py
# Re-run with HIDE_REFERER set - could affect login
export HIDE_REFERER=True
pytest tests/test_access_control.py
# Now for the selenium and playwright/browserless fetchers
# Note - this is not UI functional tests - just checking that each one can fetch the content
echo "TESTING WEBDRIVER FETCH > SELENIUM/WEBDRIVER..."
docker run -d --name $$-test_selenium -p 4444:4444 --rm --shm-size="2g" selenium/standalone-chrome-debug:3.141.59
# takes a while to spin up
sleep 5
export WEBDRIVER_URL=http://localhost:4444/wd/hub
pytest tests/fetchers/test_content.py
pytest tests/test_errorhandling.py
unset WEBDRIVER_URL
docker kill $$-test_selenium
echo "TESTING WEBDRIVER FETCH > PLAYWRIGHT/BROWSERLESS..."
# Not all platforms support playwright (not ARM/rPI), so it's not packaged in requirements.txt
PLAYWRIGHT_VERSION=$(grep -i -E "RUN pip install.+" "$SCRIPT_DIR/../Dockerfile" | grep --only-matching -i -E "playwright[=><~+]+[0-9\.]+")
echo "using $PLAYWRIGHT_VERSION"
pip3 install "$PLAYWRIGHT_VERSION"
docker run -d --name $$-test_browserless -e "DEFAULT_LAUNCH_ARGS=[\"--window-size=1920,1080\"]" --rm -p 3000:3000 --shm-size="2g" browserless/chrome:1.53-chrome-stable
# takes a while to spin up
sleep 5
export PLAYWRIGHT_DRIVER_URL=ws://127.0.0.1:3000
pytest tests/fetchers/test_content.py
pytest tests/test_errorhandling.py
pytest tests/visualselector/test_fetch_data.py
unset PLAYWRIGHT_DRIVER_URL
docker kill $$-test_browserless
# Test proxy list handling, starting two squids on different ports
# Each squid adds a different header to the response, which is the main thing we test for.
docker run -d --name $$-squid-one --rm -v `pwd`/tests/proxy_list/squid.conf:/etc/squid/conf.d/debian.conf -p 3128:3128 ubuntu/squid:4.13-21.10_edge
docker run -d --name $$-squid-two --rm -v `pwd`/tests/proxy_list/squid.conf:/etc/squid/conf.d/debian.conf -p 3129:3128 ubuntu/squid:4.13-21.10_edge
# So, basic HTTP as env var test
export HTTP_PROXY=http://localhost:3128
export HTTPS_PROXY=http://localhost:3128
pytest tests/proxy_list/test_proxy.py
docker logs $$-squid-one 2>/dev/null|grep one.changedetection.io
if [ $? -ne 0 ]
then
echo "Did not see a request to one.changedetection.io in the squid logs (while checking env vars HTTP_PROXY/HTTPS_PROXY)"
fi
unset HTTP_PROXY
unset HTTPS_PROXY
# 2nd test actually choose the preferred proxy from proxies.json
cp tests/proxy_list/proxies.json-example ./test-datastore/proxies.json
# Makes a watch use a preferred proxy
pytest tests/proxy_list/test_multiple_proxy.py
# Should be a request in the default "first" squid
docker logs $$-squid-one 2>/dev/null|grep chosen.changedetection.io
if [ $? -ne 0 ]
then
echo "Did not see a request to chosen.changedetection.io in the squid logs (while checking preferred proxy)"
fi
# And one in the 'second' squid (user selects this as preferred)
docker logs $$-squid-two 2>/dev/null|grep chosen.changedetection.io
if [ $? -ne 0 ]
then
echo "Did not see a request to chosen.changedetection.io in the squid logs (while checking preferred proxy)"
fi
# @todo - test system override proxy selection and watch defaults, setup a 3rd squid?
docker kill $$-squid-one
docker kill $$-squid-two

View File

@@ -0,0 +1,38 @@
#!/bin/bash
# live_server will throw errors even with live_server_scope=function if I have the live_server setup in different functions
# and I like to restart the server for each test (and have the test cleanup after each test)
# merge request welcome :)
# exit when any command fails
set -e
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
find tests/test_*py -type f|while read test_name
do
echo "TEST RUNNING $test_name"
pytest $test_name
done
echo "RUNNING WITH BASE_URL SET"
# Now re-run some tests with BASE_URL enabled
# Re #65 - Ability to include a link back to the installation, in the notification.
export BASE_URL="https://really-unique-domain.io"
pytest tests/test_notification.py
# Re-run with HIDE_REFERER set - could affect login
export HIDE_REFERER=True
pytest tests/test_access_control.py
# Re-run a few tests that will trigger brotli based storage
export SNAPSHOT_BROTLI_COMPRESSION_THRESHOLD=5
pytest tests/test_access_control.py
pytest tests/test_notification.py
pytest tests/test_backend.py
pytest tests/test_rss.py
pytest tests/test_unique_lines.py

View File

@@ -0,0 +1,61 @@
#!/bin/bash
# exit when any command fails
set -e
# Test proxy list handling, starting two squids on different ports
# Each squid adds a different header to the response, which is the main thing we test for.
docker run --network changedet-network -d --name squid-one --hostname squid-one --rm -v `pwd`/tests/proxy_list/squid.conf:/etc/squid/conf.d/debian.conf ubuntu/squid:4.13-21.10_edge
docker run --network changedet-network -d --name squid-two --hostname squid-two --rm -v `pwd`/tests/proxy_list/squid.conf:/etc/squid/conf.d/debian.conf ubuntu/squid:4.13-21.10_edge
# Used for configuring a custom proxy URL via the UI
docker run --network changedet-network -d \
--name squid-custom \
--hostname squid-custom \
--rm \
-v `pwd`/tests/proxy_list/squid-auth.conf:/etc/squid/conf.d/debian.conf \
-v `pwd`/tests/proxy_list/squid-passwords.txt:/etc/squid3/passwords \
ubuntu/squid:4.13-21.10_edge
## 2nd test actually choose the preferred proxy from proxies.json
docker run --network changedet-network \
-v `pwd`/tests/proxy_list/proxies.json-example:/app/changedetectionio/test-datastore/proxies.json \
test-changedetectionio \
bash -c 'cd changedetectionio && pytest tests/proxy_list/test_multiple_proxy.py'
## Should be a request in the default "first" squid
docker logs squid-one 2>/dev/null|grep chosen.changedetection.io
if [ $? -ne 0 ]
then
echo "Did not see a request to chosen.changedetection.io in the squid logs (while checking preferred proxy - squid one)"
exit 1
fi
# And one in the 'second' squid (user selects this as preferred)
docker logs squid-two 2>/dev/null|grep chosen.changedetection.io
if [ $? -ne 0 ]
then
echo "Did not see a request to chosen.changedetection.io in the squid logs (while checking preferred proxy - squid two)"
exit 1
fi
# Test the UI configurable proxies
docker run --network changedet-network \
test-changedetectionio \
bash -c 'cd changedetectionio && pytest tests/proxy_list/test_select_custom_proxy.py'
# Should see a request for one.changedetection.io in there
docker logs squid-custom 2>/dev/null|grep "TCP_TUNNEL.200.*changedetection.io"
if [ $? -ne 0 ]
then
echo "Did not see a valid request to changedetection.io in the squid logs (while checking preferred proxy - squid two)"
exit 1
fi
docker kill squid-one squid-two squid-custom

View File

@@ -0,0 +1,37 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg
fill="#FFFFFF"
height="7.5005589"
width="11.248507"
version="1.1"
id="Layer_1"
viewBox="0 0 7.1975545 4.7993639"
xml:space="preserve"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><defs
id="defs19" />
<g
id="g14"
transform="matrix(-0.01406065,0,0,0.01406065,7.1975543,-1.1990922)">
<g
id="g12">
<g
id="g10">
<path
d="M 468.373,85.28 H 45.333 C 21.227,85.28 0,105.76 0,129.014 V 383.2 c 0,23.147 21.227,43.413 45.333,43.413 h 422.933 c 23.68,0 43.627,-19.84 43.627,-43.413 V 129.014 C 512,105.334 492.053,85.28 468.373,85.28 Z m 0,320 H 45.333 c -12.373,0 -24,-10.773 -24,-22.08 V 129.014 c 0,-11.307 11.84,-22.4 24,-22.4 h 422.933 c 11.733,0 22.293,10.667 22.293,22.4 V 383.2 h 0.107 c 10e-4,11.734 -10.453,22.08 -22.293,22.08 z"
id="path2" />
<path
d="m 440.853,153.974 c -3.307,-4.907 -9.92,-6.187 -14.827,-2.987 L 256,264.48 85.973,151.094 c -4.907,-3.2 -11.52,-1.707 -14.72,3.2 -3.093,4.8 -1.813,11.307 2.88,14.507 l 176,117.333 c 3.627,2.347 8.213,2.347 11.84,0 l 176,-117.333 c 4.8,-3.201 6.187,-9.921 2.88,-14.827 z"
id="path4" />
<path
d="m 143.573,257.654 c -0.107,0.107 -0.32,0.213 -0.427,0.32 L 68.48,311.307 c -4.907,3.307 -6.187,9.92 -2.88,14.827 3.307,4.907 9.92,6.187 14.827,2.88 0.107,-0.107 0.32,-0.213 0.427,-0.32 l 74.667,-53.333 c 4.907,-3.307 6.187,-9.92 2.88,-14.827 -3.308,-4.907 -9.921,-6.187 -14.828,-2.88 z"
id="path6" />
<path
d="m 443.947,311.627 c -0.107,-0.107 -0.32,-0.213 -0.427,-0.32 l -74.667,-53.333 c -4.693,-3.52 -11.413,-2.56 -14.933,2.133 -3.52,4.693 -2.56,11.413 2.133,14.933 0.107,0.107 0.32,0.213 0.427,0.32 l 74.667,53.333 c 4.693,3.52 11.413,2.56 14.933,-2.133 3.52,-4.693 2.56,-11.413 -2.133,-14.933 z"
id="path8" />
</g>
</g>
</g>
</svg>


View File

@@ -0,0 +1,3 @@
<?xml version="1.0" encoding="UTF-8"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg width="61.649mm" height="61.649mm" version="1.1" viewBox="0 0 61.649 61.649" xml:space="preserve" xmlns="http://www.w3.org/2000/svg"><g transform="translate(66.269 -15.463)" fill="#3056d3"><g transform="matrix(1.423 0 0 1.423 101.16 69.23)" fill="#3056d3"><g transform="matrix(.8229 0 0 .8229 -23.378 -2.3935)" fill="#3056d3"><path d="m-88.248-43.007a26.323 26.323 0 0 0-26.323 26.323 26.323 26.323 0 0 0 26.323 26.323 26.323 26.323 0 0 0 26.323-26.323 26.323 26.323 0 0 0-26.323-26.323zm0 2.8417a23.482 23.482 0 0 1 23.482 23.482 23.482 23.482 0 0 1-23.482 23.482 23.482 23.482 0 0 1-23.482-23.482 23.482 23.482 0 0 1 23.482-23.482z"/><g transform="matrix(.26458 0 0 .26458 -115.65 -44.085)"><path d="m33.02 64.43c0.35-0.05 2.04-0.13 2.04-0.13h25.53s3.17 0.32 3.67 0.53c2.5 1.05 3.98 1.89 6.04 3.57 0.72 0.58 4.12 4.01 4.12 4.01l51.67 57.39s1.61 1.65 1.97 1.94c1.2 0.97 2.48 1.96 3.98 2.32 0.5 0.12 2.72 0.21 2.72 0.21h27.32l-8.83-9.04s-1.31-1.65-1.44-1.94c-0.45-0.93-0.59-2.59-0.13-3.51 0.35-0.69 1.46-1.87 2.23-1.98 1.03-0.14 2.12-0.39 3.02 0.14 0.33 0.2 1.64 1.32 1.64 1.32l17.49 17.49s1.35 1.09 1.6 1.6c0.17 0.34 0.29 0.82 0.15 1.18-0.17 0.42-1.42 1.63-1.42 1.63l-0.94 0.98-15.69 16.37s-1.44 1.4-1.79 1.67c-0.76 0.6-1.99 0.89-2.96 0.9-1.03 0-2.62-1.11-3.26-1.91-0.6-0.76-1.1-2.22-0.77-3.13 0.16-0.45 1.28-1.85 1.28-1.85l11.36-11.3-29.47-0.02-1.68 0.09s-4.16-0.66-5.26-1.03c-1.63-0.56-3.44-1.82-4.75-2.93-0.39-0.33-1.8-1.92-1.8-1.92l-51.7-59.28s-2-2.06-2.43-2.43c-1.37-1.17-2-1.62-3.76-2.34-0.44-0.18-3.45-0.55-3.45-0.55l-24.13-0.22s-2.23-0.15-2.61-0.22c-1.08-0.21-2.16-1.07-2.81-1.83-0.79-0.92-0.59-3.06 0.06-4.09 0.57-0.89 2.14-1.52 3.19-1.66z"/><path d="m86.1 109.7-17.13 19.65s-2 2.06-2.43 2.43c-1.37 1.17-2 1.62-3.76 2.34-0.44 0.18-3.45 0.55-3.45 0.55l-24.13 0.22s-2.23 0.15-2.61 0.22c-1.08 0.21-2.16 1.07-2.81 1.83-0.79 0.92-0.59 3.06 0.06 4.09 0.57 0.89 2.14 1.52 3.19 1.66 0.35 0.05 2.04 0.13 2.04 0.13h25.53s3.17-0.32 3.67-0.53c2.5-1.05 3.98-1.89 6.04-3.57 0.72-0.58 4.12-4.01 4.12-4.01l17.38-19.3z"/><path d="m177.81 67.6c-0.17-0.42-1.42-1.63-1.42-1.63l-0.94-0.98-15.69-16.37s-1.44-1.4-1.79-1.67c-0.76-0.6-1.99-0.89-2.96-0.9-1.03 0-2.62 1.11-3.26 1.91-0.6 0.76-1.1 2.22-0.77 3.13 0.16 0.45 1.28 1.85 1.28 1.85l11.36 11.3-29.47 0.02-1.68-0.09s-4.16 0.66-5.26 1.03c-1.63 0.56-3.44 1.82-4.75 2.93-0.39 0.33-1.8 1.92-1.8 1.92l-18.91 21.69 5.98 5.98 18.38-20.41s1.61-1.65 1.97-1.94c1.2-0.97 2.48-1.96 3.98-2.32 0.5-0.12 2.72-0.21 2.72-0.21h27.32l-8.83 9.04s-1.31 1.65-1.44 1.94c-0.45 0.93-0.59 2.59-0.13 3.51 0.35 0.69 1.46 1.87 2.23 1.98 1.03 0.14 2.12 0.39 3.02-0.14 0.33-0.2 1.64-1.32 1.64-1.32l17.49-17.49s1.35-1.09 1.6-1.6c0.17-0.34 0.29-0.82 0.15-1.18z"/></g></g></g></g></svg>


Binary file not shown (image updated; 43 KiB before, 22 KiB after).

View File

@@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg xmlns="http://www.w3.org/2000/svg" width="75.320129mm" height="92.604164mm" viewBox="0 0 75.320129 92.604164">
<g transform="translate(53.548057 -183.975276) scale(1.4843)">
<path fill="#ff2116" d="M-29.632812 123.94727c-3.551967 0-6.44336 2.89347-6.44336 6.44531v49.49804c0 3.55185 2.891393 6.44532 6.44336 6.44532H8.2167969c3.5519661 0 6.4433591-2.89335 6.4433591-6.44532v-40.70117s.101353-1.19181-.416015-2.35156c-.484969-1.08711-1.275391-1.84375-1.275391-1.84375a1.0584391 1.0584391 0 0 0-.0059-.008l-9.3906254-9.21094a1.0584391 1.0584391 0 0 0-.015625-.0156s-.8017392-.76344-1.9902344-1.27344c-1.39939552-.6005-2.8417968-.53711-2.8417968-.53711l.021484-.002z" color="#000" font-family="sans-serif" overflow="visible" paint-order="markers fill stroke" style="line-height:normal;font-variant-ligatures:normal;font-variant-position:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-alternates:normal;font-feature-settings:normal;text-indent:0;text-align:start;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;text-transform:none;text-orientation:mixed;white-space:normal;shape-padding:0;isolation:auto;mix-blend-mode:normal;solid-color:#000000;solid-opacity:1"/>
<path fill="#f5f5f5" d="M-29.632812 126.06445h28.3789058a1.0584391 1.0584391 0 0 0 .021484 0s1.13480448.011 1.96484378.36719c.79889772.34282 1.36536982.86176 1.36914062.86524.0000125.00001.00391.004.00391.004l9.3671868 9.18945s.564354.59582.837891 1.20899c.220779.49491.234375 1.40039.234375 1.40039a1.0584391 1.0584391 0 0 0-.002.0449v40.74609c0 2.41592-1.910258 4.32813-4.3261717 4.32813H-29.632812c-2.415914 0-4.326172-1.91209-4.326172-4.32813v-49.49804c0-2.41603 1.910258-4.32813 4.326172-4.32813z" color="#000" font-family="sans-serif" overflow="visible" paint-order="markers fill stroke" style="line-height:normal;font-variant-ligatures:normal;font-variant-position:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-alternates:normal;font-feature-settings:normal;text-indent:0;text-align:start;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;text-transform:none;text-orientation:mixed;white-space:normal;shape-padding:0;isolation:auto;mix-blend-mode:normal;solid-color:#000000;solid-opacity:1"/>
<path fill="#ff2116" d="M-23.40766 161.09299c-1.45669-1.45669.11934-3.45839 4.39648-5.58397l2.69124-1.33743 1.04845-2.29399c.57665-1.26169 1.43729-3.32036 1.91254-4.5748l.8641-2.28082-.59546-1.68793c-.73217-2.07547-.99326-5.19438-.52872-6.31588.62923-1.51909 2.69029-1.36323 3.50626.26515.63727 1.27176.57212 3.57488-.18329 6.47946l-.6193 2.38125.5455.92604c.30003.50932 1.1764 1.71867 1.9475 2.68743l1.44924 1.80272 1.8033728-.23533c5.72900399-.74758 7.6912472.523 7.6912472 2.34476 0 2.29921-4.4984914 2.48899-8.2760865-.16423-.8499666-.59698-1.4336605-1.19001-1.4336605-1.19001s-2.3665326.48178-3.531704.79583c-1.202707.32417-1.80274.52719-3.564509 1.12186 0 0-.61814.89767-1.02094 1.55026-1.49858 2.4279-3.24833 4.43998-4.49793 5.1723-1.3991.81993-2.86584.87582-3.60433.13733zm2.28605-.81668c.81883-.50607 2.47616-2.46625 3.62341-4.28553l.46449-.73658-2.11497 1.06339c-3.26655 1.64239-4.76093 3.19033-3.98386 4.12664.43653.52598.95874.48237 2.01093-.16792zm21.21809-5.95578c.80089-.56097.68463-1.69142-.22082-2.1472-.70466-.35471-1.2726074-.42759-3.1031574-.40057-1.1249.0767-2.9337647.3034-3.2403347.37237 0 0 .993716.68678 1.434896.93922.58731.33544 2.0145161.95811 3.0565161 1.27706 1.02785.31461 1.6224.28144 2.0729-.0409zm-8.53152-3.54594c-.4847-.50952-1.30889-1.57296-1.83152-2.3632-.68353-.89643-1.02629-1.52887-1.02629-1.52887s-.4996 1.60694-.90948 2.57394l-1.27876 3.16076-.37075.71695s1.971043-.64627 2.97389-.90822c1.0621668-.27744 3.21787-.70134 3.21787-.70134zm-2.74938-11.02573c.12363-1.0375.1761-2.07346-.15724-2.59587-.9246-1.01077-2.04057-.16787-1.85154 2.23517.0636.8084.26443 2.19033.53292 3.04209l.48817 1.54863.34358-1.16638c.18897-.64151.47882-2.02015.64411-3.06364z"/>
<path fill="#2c2c2c" d="M-20.930423 167.83862h2.364986q1.133514 0 1.840213.2169.706698.20991 1.189489.9446.482795.72769.482795 1.75625 0 .94459-.391832 1.6233-.391833.67871-1.056548.97958-.65772.30087-2.02913.30087h-.818651v3.72941h-1.581322zm1.581322 1.22447v3.33058h.783664q1.049552 0 1.44838-.39184.405826-.39183.405826-1.27345 0-.65772-.265887-1.06355-.265884-.41282-.587747-.50378-.314866-.098-1.000572-.098zm5.50664-1.22447h2.148082q1.560333 0 2.4909318.55276.9375993.55276 1.4133973 1.6443.482791 1.09153.482791 2.42096 0 1.3994-.4338151 2.49793-.4268149 1.09153-1.3154348 1.76324-.8816233.67172-2.5189212.67172h-2.267031zm1.581326 1.26645v7.018h.657715q1.378411 0 2.001144-.9516.6227329-.95858.6227329-2.5539 0-3.5125-2.6238769-3.5125zm6.4722254-1.26645h5.30372941v1.26645H-4.2075842v2.85478h2.9807225v1.26646h-2.9807225v4.16322h-1.5813254z" font-family="Franklin Gothic Medium Cond" letter-spacing="0" style="line-height:125%;-inkscape-font-specification:'Franklin Gothic Medium Cond'" word-spacing="4.26000023"/>
</g>
</svg>


View File

@@ -0,0 +1,2 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="83.39" height="89.648" enable-background="new 0 0 122.406 122.881" version="1.1" viewBox="0 0 83.39 89.648" xml:space="preserve" xmlns="http://www.w3.org/2000/svg"><g transform="translate(5e-4 -33.234)"><path d="m44.239 42.946-39.111 39.896 34.908 34.91 39.09-39.876-1.149-34.931zm-0.91791 42.273c0.979-0.979 1.507-1.99 1.577-3.027 0.077-1.043-0.248-2.424-0.967-4.135-0.725-1.717-1.348-3.346-1.87-4.885s-0.814-3.014-0.897-4.432c-0.07-1.42 0.134-2.768 0.624-4.045 0.477-1.279 1.348-2.545 2.607-3.804 2.099-2.099 4.535-3.123 7.314-3.065 2.773 0.063 5.457 1.158 8.04 3.294l2.881 3.034c1.946 2.607 2.799 5.33 2.557 8.166-0.235 2.83-1.532 5.426-3.893 7.785l-6.296-6.297c1.291-1.291 2.035-2.531 2.238-3.727 0.191-1.197-0.165-2.252-1.081-3.168-0.821-0.82-1.717-1.195-2.69-1.139-0.967 0.064-1.908 0.547-2.817 1.457-0.922 0.922-1.393 1.914-1.412 2.977s0.306 2.416 0.973 4.064c0.661 1.652 1.24 3.25 1.736 4.801 0.496 1.553 0.782 3.035 0.858 4.445 0.076 1.426-0.127 2.787-0.591 4.104-0.477 1.316-1.336 2.596-2.588 3.848-2.125 2.125-4.522 3.186-7.212 3.18s-5.311-1.063-7.855-3.16l-3.747 3.746-2.964-2.965 3.766-3.764c-2.423-2.996-3.568-5.998-3.447-9.02 0.127-3.014 1.476-5.813 4.045-8.383l6.278 6.277c-1.412 1.412-2.175 2.799-2.277 4.16-0.108 1.367 0.414 2.627 1.571 3.783 0.839 0.84 1.755 1.26 2.741 1.242 0.985-0.017 1.92-0.47 2.798-1.347zm21.127-46.435h17.457c-0.0269 2.2368 0.69936 16.025 0.69936 16.025l0.785 23.858c0.019 0.609-0.221 1.164-0.619 1.564l5e-3 4e-3 -41.236 42.022c-0.82213 0.8378-2.175 0.83-3.004 0l-37.913-37.91c-0.83-0.83-0.83-2.176 0-3.006l41.236-42.021c0.39287-0.42671 1.502-0.53568 1.502-0.53568zm18.011 11.59c-59.392-29.687-29.696-14.843 0 0z"/></g></svg>


View File

@@ -114,11 +114,11 @@ $(document).ready(function () {
e.preventDefault()
});
// When the mouse moves we know which element it should be above
// mousedown will link that to the UI (select the right action, highlight etc)
$('#browsersteps-selector-canvas').bind('mousedown', function (e) {
// https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent
e.preventDefault()
console.log(e);
console.log("current xpath in index is " + current_selected_i);
last_click_xy = {'x': parseInt((1 / x_scale) * e.offsetX), 'y': parseInt((1 / y_scale) * e.offsetY)}
process_selected(current_selected_i);
current_selected_i = false;
@@ -132,6 +132,7 @@ $(document).ready(function () {
}
});
// Debounce and find the current most 'interesting' element we are hovering above
$('#browsersteps-selector-canvas').bind('mousemove', function (e) {
if (!xpath_data) {
return;
@@ -151,41 +152,40 @@ $(document).ready(function () {
current_selected_i = false;
// Reverse order - the most specific one should be deeper/"laster"
// Basically, find the most 'deepest'
-//$('#browsersteps-selector-canvas').css('cursor', 'pointer');
-for (var i = xpath_data['size_pos'].length; i !== 0; i--) {
-// draw all of them? let them choose somehow?
-var sel = xpath_data['size_pos'][i - 1];
+var possible_elements = [];
+xpath_data['size_pos'].forEach(function (item, index) {
// If we are in a bounding-box
-if (e.offsetY > sel.top * y_scale && e.offsetY < sel.top * y_scale + sel.height * y_scale
+if (e.offsetY > item.top * y_scale && e.offsetY < item.top * y_scale + item.height * y_scale
&&
-e.offsetX > sel.left * y_scale && e.offsetX < sel.left * y_scale + sel.width * y_scale
+e.offsetX > item.left * y_scale && e.offsetX < item.left * y_scale + item.width * y_scale
) {
-// Only highlight these interesting types
-if (1) {
-ctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
-ctx.fillRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
-current_selected_i = i - 1;
-break;
-// find the smallest one at this x,y
-// does it mean sort the xpath list by size (w*h) i think so!
-} else {
-if (include_text_elements[0].checked === true) {
-// blue one with background instead?
-ctx.fillStyle = 'rgba(0,0,255, 0.1)';
-ctx.strokeStyle = 'rgba(0,0,200, 0.7)';
-$('#browsersteps-selector-canvas').css('cursor', 'grab');
-ctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
-ctx.fillRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
-current_selected_i = i - 1;
-break;
-}
-}
+// There could be many elements here, record them all and then we'll find out which is the most 'useful'
+// (input, textarea, button, A etc)
+if (item.width < xpath_data['browser_width']) {
+possible_elements.push(item);
}
}
+});
+// Find the best one
+if (possible_elements.length) {
+possible_elements.forEach(function (item, index) {
+if (["a", "input", "textarea", "button"].includes(item['tagName'])) {
+current_selected_i = item;
+}
+});
+if (!current_selected_i) {
+current_selected_i = possible_elements[0];
+}
+sel = xpath_data['size_pos'][current_selected_i];
+ctx.strokeRect(current_selected_i.left * x_scale, current_selected_i.top * y_scale, current_selected_i.width * x_scale, current_selected_i.height * y_scale);
+ctx.fillRect(current_selected_i.left * x_scale, current_selected_i.top * y_scale, current_selected_i.width * x_scale, current_selected_i.height * y_scale);
}
}.debounce(10));
});
@@ -195,16 +195,16 @@ $(document).ready(function () {
// callback for clicking on an xpath on the canvas
-function process_selected(xpath_data_index) {
+function process_selected(selected_in_xpath_list) {
found_something = false;
var first_available = $("ul#browser_steps li.empty").first();
-if (xpath_data_index !== false) {
+if (selected_in_xpath_list !== false) {
// Nothing focused, so fill in a new one
// if inpt type button or <button>
// from the top, find the next not used one and use it
-var x = xpath_data['size_pos'][xpath_data_index];
+var x = selected_in_xpath_list;
console.log(x);
if (x && first_available.length) {
// @todo will it let you click shit that has a layer ontop? probably not.
@@ -214,26 +214,18 @@ $(document).ready(function () {
$('input[placeholder="Value"]', first_available).addClass('ok').click().focus();
found_something = true;
} else {
-if (x['isClickable'] || x['tagName'].startsWith('h') || x['tagName'] === 'a' || x['tagName'] === 'button' || x['tagtype'] === 'submit' || x['tagtype'] === 'checkbox' || x['tagtype'] === 'radio' || x['tagtype'] === 'li') {
+// There's no good way (that I know) to find if this
+// see https://stackoverflow.com/questions/446892/how-to-find-event-listeners-on-a-dom-node-in-javascript-or-in-debugging
+// https://codepen.io/azaslavsky/pen/DEJVWv
+// So we dont know if its really a clickable element or not :-(
+// Assume it is - then we dont fill the pages with unreliable "Click X,Y" selections
+// If you switch to "Click X,y" after an element here is setup, it will give the last co-ords anyway
+//if (x['isClickable'] || x['tagName'].startsWith('h') || x['tagName'] === 'a' || x['tagName'] === 'button' || x['tagtype'] === 'submit' || x['tagtype'] === 'checkbox' || x['tagtype'] === 'radio' || x['tagtype'] === 'li') {
$('select', first_available).val('Click element').change();
$('input[type=text]', first_available).first().val(x['xpath']);
found_something = true;
-}
+//}
-}
-first_available.xpath_data_index = xpath_data_index;
-if (!found_something) {
-if (include_text_elements[0].checked === true) {
-// Suggest that we use as filter?
-// @todo filters should always be in the last steps, nothing non-filter after it
-found_something = true;
-ctx.strokeStyle = 'rgba(0,0,255, 0.9)';
-ctx.fillStyle = 'rgba(0,0,255, 0.1)';
-$('select', first_available).val('Extract text and use as filter').change();
-$('input[type=text]', first_available).first().val(x['xpath']);
-include_text_elements[0].checked = false;
-}
-}
}
}
}
@@ -248,7 +240,7 @@ $(document).ready(function () {
function start() {
console.log("Starting browser-steps UI");
-browsersteps_session_id = Date.now();
+browsersteps_session_id = false;
// @todo This setting of the first one should be done at the datalayer but wtforms doesnt wanna play nice
$('#browser_steps >li:first-child').removeClass('empty');
set_first_gotosite_disabled();
@@ -256,7 +248,7 @@ $(document).ready(function () {
$('.clear,.remove', $('#browser_steps >li:first-child')).hide();
$.ajax({
type: "GET",
-url: browser_steps_sync_url + "&browsersteps_session_id=" + browsersteps_session_id,
+url: browser_steps_start_url,
statusCode: {
400: function () {
// More than likely the CSRF token was lost when the server restarted
@@ -264,12 +256,12 @@ $(document).ready(function () {
}
}
}).done(function (data) {
-xpath_data = data.xpath_data;
$("#loading-status-text").fadeIn();
+browsersteps_session_id = data.browsersteps_session_id;
// This should trigger 'Goto site'
console.log("Got startup response, requesting Goto-Site (first) step fake click");
$('#browser_steps >li:first-child .apply').click();
-browserless_seconds_remaining = data.browser_time_remaining;
+browserless_seconds_remaining = 500;
set_first_gotosite_disabled();
}).fail(function (data) {
console.log(data);
@@ -430,7 +422,6 @@ $(document).ready(function () {
apply_buttons_disabled = false;
$("#browsersteps-img").css('opacity', 1);
$('ul#browser_steps li .control .apply').css('opacity', 1);
-browserless_seconds_remaining = data.browser_time_remaining;
$("#loading-status-text").hide();
set_first_gotosite_disabled();
}).fail(function (data) {

View File

@@ -26,9 +26,6 @@ $(document).ready(function() {
data = {
window_url : window.location.href,
notification_urls : $('.notification-urls').val(),
-notification_title : $('.notification-title').val(),
-notification_body : $('.notification-body').val(),
-notification_format : $('.notification-format').val(),
}
for (key in data) {
if (!data[key].length) {

View File

@@ -12,7 +12,7 @@ window.addEventListener('hashchange', function () {
var has_errors = document.querySelectorAll(".messages .error");
if (!has_errors.length) {
if (document.location.hash == "") {
-document.querySelector(".tabs ul li:first-child a").click();
+location.replace(document.querySelector(".tabs ul li:first-child a").hash);
} else {
set_active_tab();
}

View File

@@ -3,7 +3,7 @@
* Toggles theme between light and dark mode.
*/
$(document).ready(function () {
-const button = document.getElementsByClassName("toggle-theme")[0];
+const button = document.getElementById("toggle-light-mode");
button.onclick = () => {
const htmlElement = document.getElementsByTagName("html");
@@ -21,4 +21,33 @@ $(document).ready(function () {
const setCookieValue = (value) => {
document.cookie = `css_dark_mode=${value};max-age=31536000;path=/`
}
// Search input box behaviour
const toggle_search = document.getElementById("toggle-search");
const search_q = document.getElementById("search-q");
window.addEventListener('keydown', function (e) {
if (e.altKey == true && e.keyCode == 83)
search_q.classList.toggle('expanded');
search_q.focus();
});
search_q.onkeydown = (e) => {
var key = e.keyCode || e.which;
if (key === 13) {
document.searchForm.submit();
}
};
toggle_search.onclick = () => {
// Could be that they want to search something once text is in there
if (search_q.value.length) {
document.searchForm.submit();
} else {
// If not..
search_q.classList.toggle('expanded');
search_q.focus();
}
};
});

View File

@@ -1,4 +1,5 @@
-// Horrible proof of concept code :)
+// Copyright (C) 2021 Leigh Morresi (dgtlmoon@gmail.com)
// All rights reserved.
// yes - this is really a hack, if you are a front-ender and want to help, please get in touch!
$(document).ready(function () {
@@ -60,7 +61,12 @@ $(document).ready(function () {
function bootstrap_visualselector() {
if (1) {
// bootstrap it, this will trigger everything else
-$("img#selector-background").bind('load', function () {
+$("img#selector-background").on("error", function () {
$('.fetching-update-notice').html("<strong>Ooops!</strong> The VisualSelector tool needs atleast one fetched page, please unpause the watch and/or wait for the watch to complete fetching and then reload this page.");
$('.fetching-update-notice').css('color','#bb0000');
$('#selector-current-xpath').hide();
$('#clear-selector').hide();
}).bind('load', function () {
console.log("Loaded background..."); console.log("Loaded background...");
c = document.getElementById("selector-canvas"); c = document.getElementById("selector-canvas");
// greyed out fill context // greyed out fill context
@@ -78,10 +84,11 @@ $(document).ready(function () {
}).attr("src", screenshot_url); }).attr("src", screenshot_url);
} }
// Tell visualSelector that the image should update // Tell visualSelector that the image should update
var s = $("img#selector-background").attr('src')+"?"+ new Date().getTime(); var s = $("img#selector-background").attr('src') + "?" + new Date().getTime();
$("img#selector-background").attr('src',s) $("img#selector-background").attr('src', s)
} }
// This is fired once the img src is loaded in bootstrap_visualselector()
function fetch_data() {
// Image is ready
$('.fetching-update-notice').html("Fetching element data..");
@@ -98,7 +105,8 @@ $(document).ready(function () {
reflow_selector();
$('.fetching-update-notice').fadeOut();
});
-};
+}
function set_scale() {
@@ -177,9 +185,10 @@ $(document).ready(function () {
// Basically, find the most 'deepest'
var found = 0;
ctx.fillStyle = 'rgba(205,0,0,0.35)';
-for (var i = selector_data['size_pos'].length; i !== 0; i--) {
+// Will be sorted by smallest width*height first
+for (var i = 0; i <= selector_data['size_pos'].length; i++) {
// draw all of them? let them choose somehow?
-var sel = selector_data['size_pos'][i - 1];
+var sel = selector_data['size_pos'][i];
// If we are in a bounding-box
if (e.offsetY > sel.top * y_scale && e.offsetY < sel.top * y_scale + sel.height * y_scale
&&
@@ -195,7 +204,7 @@ $(document).ready(function () {
// no need to keep digging
// @todo or, O to go out/up, I to go in
// or double click to go up/out the selector?
-current_selected_i = i - 1;
+current_selected_i = i;
found += 1;
break;
}

View File

@@ -1,7 +1,7 @@
-$(document).ready(function() {
+$(document).ready(function () {
function toggle() {
if ($('input[name="fetch_backend"]:checked').val() == 'html_webdriver') {
-if(playwright_enabled) {
+if (playwright_enabled) {
// playwright supports headers, so hide everything else
// See #664
$('#requests-override-options #request-method').hide();
@@ -14,9 +14,14 @@ $(document).ready(function() {
$('#requests-override-options').hide();
}
$('#webdriver-override-options').show();
} else if ($('input[name="fetch_backend"]:checked').val() == 'system') {
$('#requests-override-options #request-method').hide();
$('#requests-override-options #request-body').hide();
$('#ignore-status-codes-option').hide();
$('#requests-override-options').hide();
$('#webdriver-override-options').hide();
} else {
$('#requests-override-options').show();

View File

@@ -0,0 +1,3 @@
node_modules
package-lock.json

View File

@@ -0,0 +1,17 @@
ul#requests-extra_proxies {
list-style: none;
/* tidy up the table to look more "inline" */
li {
> label {
display: none;
}
}
/* each proxy entry is a `table` */
table {
tr {
display: inline;
}
}
}

View File

@@ -0,0 +1,37 @@
.pagination-page-info {
color: #fff;
font-size: 0.85rem;
text-transform: capitalize;
}
.pagination.menu {
> * {
display: inline-block;
}
li {
display: inline-block;
}
a {
padding: 0.65rem;
margin: 3px;
border: none;
background: #444;
border-radius: 2px;
color: var(--color-text-button);
&.disabled {
display: none;
}
&.active {
font-weight: bold;
background: #888;
}
&:hover {
background: #999;
}
}
}

View File

@@ -2,10 +2,12 @@
* -- BASE STYLES --
*/
-@import "parts/_variables";
-@import "parts/_spinners";
-@import "parts/_browser-steps";
@import "parts/_arrows";
+@import "parts/_browser-steps";
+@import "parts/_extra_proxies";
+@import "parts/_pagination";
+@import "parts/_spinners";
+@import "parts/_variables";
body {
color: var(--color-text);
@@ -22,6 +24,13 @@ body {
width: 1px;
}
// Row icons like chrome, pdf, share, etc
.status-icon {
display: inline-block;
height: 1rem;
vertical-align: middle;
}
.pure-table-even {
background: var(--color-background);
}
@@ -45,8 +54,47 @@ a.github-link {
}
}
-button.toggle-theme {
-width: 4rem;
+#toggle-light-mode {
+width: 3rem;
.icon-dark {
display: none;
}
&.dark {
.icon-light {
display: none;
}
.icon-dark {
display: block;
}
}
}
#toggle-search {
width: 2rem;
}
#search-q {
opacity: 0;
-webkit-transition: all .9s ease;
-moz-transition: all .9s ease;
transition: all .9s ease;
width: 0;
display: none;
&.expanded {
width: auto;
display: inline-block;
opacity: 1;
}
}
#search-result-info {
color: #fff;
}
button.toggle-button {
vertical-align: middle;
background: transparent;
border: none;
cursor: pointer;
@@ -65,19 +113,7 @@ button.toggle-theme {
display: block;
}
-.icon-dark {
-display: none;
-}
-&.dark {
-.icon-light {
-display: none;
-}
-.icon-dark {
-display: block;
-}
-}
}
.pure-menu-horizontal {
@@ -233,6 +269,10 @@ body:before {
font-size: 85%;
}
.button-xsmall {
font-size: 70%;
}
.fetch-error {
padding-top: 1em;
font-size: 80%;
@@ -881,6 +921,21 @@ body.full-width {
font-size: .875em;
}
}
.text-filtering {
h3 {
margin-top: 0;
}
border: 1px solid #ccc;
padding: 1rem;
border-radius: 5px;
margin-bottom: 1rem;
fieldset:last-of-type {
padding-bottom: 0;
.pure-control-group {
padding-bottom: 0;
}
}
}
}
ul {
@@ -1009,3 +1064,57 @@ ul {
border-radius: 5px;
color: var(--color-warning);
}
/* automatic price following helpers */
.tracking-ldjson-price-data {
background-color: var(--color-background-button-green);
color: #000;
padding: 3px;
border-radius: 3px;
white-space: nowrap;
}
.ldjson-price-track-offer {
a.pure-button {
border-radius: 3px;
padding: 3px;
background-color: var(--color-background-button-green);
}
font-weight: bold;
font-style: italic;
}
.price-follow-tag-icon {
display: inline-block;
height: 0.8rem;
vertical-align: middle;
}
#quick-watch-processor-type {
color: #fff;
ul {
padding: 0.3rem;
li {
list-style: none;
font-size: 0.8rem;
}
}
}
.restock-label {
&.in-stock {
background-color: var(--color-background-button-green);
color: #fff;
}
&.not-in-stock {
background-color: var(--color-background-button-cancel);
color: #777;
}
padding: 3px;
border-radius: 3px;
white-space: nowrap;
}

View File

@@ -1,6 +1,165 @@
/*
* -- BASE STYLES --
*/
.arrow {
border: solid #1b98f8;
border-width: 0 2px 2px 0;
display: inline-block;
padding: 3px; }
.arrow.right {
transform: rotate(-45deg);
-webkit-transform: rotate(-45deg); }
.arrow.left {
transform: rotate(135deg);
-webkit-transform: rotate(135deg); }
.arrow.up, .arrow.asc {
transform: rotate(-135deg);
-webkit-transform: rotate(-135deg); }
.arrow.down, .arrow.desc {
transform: rotate(45deg);
-webkit-transform: rotate(45deg); }
#browser_steps {
/* convert rows to horizontal cells */ }
#browser_steps th {
display: none; }
#browser_steps li {
list-style: decimal;
padding: 5px; }
#browser_steps li:not(:first-child):hover {
opacity: 1.0; }
#browser_steps li .control {
padding-left: 5px;
padding-right: 5px; }
#browser_steps li .control a {
font-size: 70%; }
#browser_steps li.empty {
padding: 0px;
opacity: 0.35; }
#browser_steps li.empty .control {
display: none; }
#browser_steps li:hover {
background: #eee; }
#browser_steps li > label {
display: none; }
#browser-steps-fieldlist {
height: 100%;
overflow-y: scroll; }
#browser-steps .flex-wrapper {
display: flex;
flex-flow: row;
height: 600px;
/*@todo make this dynamic */ }
/* this is duplicate :( */
#browsersteps-selector-wrapper {
height: 100%;
width: 100%;
overflow-y: scroll;
position: relative;
/* nice tall skinny one */ }
#browsersteps-selector-wrapper > img {
position: absolute;
max-width: 100%; }
#browsersteps-selector-wrapper > canvas {
position: relative;
max-width: 100%; }
#browsersteps-selector-wrapper > canvas:hover {
cursor: pointer; }
#browsersteps-selector-wrapper .loader {
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
margin-left: -40px;
z-index: 100;
max-width: 350px;
text-align: center; }
#browsersteps-selector-wrapper .spinner, #browsersteps-selector-wrapper .spinner:after {
width: 80px;
height: 80px;
font-size: 3px; }
#browsersteps-selector-wrapper #browsersteps-click-start {
color: var(--color-grey-400); }
#browsersteps-selector-wrapper #browsersteps-click-start:hover {
cursor: pointer; }
ul#requests-extra_proxies {
list-style: none;
/* tidy up the table to look more "inline" */
/* each proxy entry is a `table` */ }
ul#requests-extra_proxies li > label {
display: none; }
ul#requests-extra_proxies table tr {
display: inline; }
.pagination-page-info {
color: #fff;
font-size: 0.85rem;
text-transform: capitalize; }
.pagination.menu > * {
display: inline-block; }
.pagination.menu li {
display: inline-block; }
.pagination.menu a {
padding: 0.65rem;
margin: 3px;
border: none;
background: #444;
border-radius: 2px;
color: var(--color-text-button); }
.pagination.menu a.disabled {
display: none; }
.pagination.menu a.active {
font-weight: bold;
background: #888; }
.pagination.menu a:hover {
background: #999; }
/* spinner */
.spinner,
.spinner:after {
border-radius: 50%;
width: 10px;
height: 10px; }
.spinner {
margin: 0px auto;
font-size: 3px;
vertical-align: middle;
display: inline-block;
text-indent: -9999em;
border-top: 1.1em solid rgba(38, 104, 237, 0.2);
border-right: 1.1em solid rgba(38, 104, 237, 0.2);
border-bottom: 1.1em solid rgba(38, 104, 237, 0.2);
border-left: 1.1em solid #2668ed;
-webkit-transform: translateZ(0);
-ms-transform: translateZ(0);
transform: translateZ(0);
-webkit-animation: load8 1.1s infinite linear;
animation: load8 1.1s infinite linear; }
@-webkit-keyframes load8 {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg); }
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg); } }
@keyframes load8 {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg); }
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg); } }
/** /**
* CSS custom properties (aka variables). * CSS custom properties (aka variables).
*/ */
@@ -138,130 +297,6 @@ html[data-darkmode="true"] {
html[data-darkmode="true"] .watch-table .unviewed.error { html[data-darkmode="true"] .watch-table .unviewed.error {
color: var(--color-watch-table-error); } color: var(--color-watch-table-error); }
/* spinner */
.spinner,
.spinner:after {
border-radius: 50%;
width: 10px;
height: 10px; }
.spinner {
margin: 0px auto;
font-size: 3px;
vertical-align: middle;
display: inline-block;
text-indent: -9999em;
border-top: 1.1em solid rgba(38, 104, 237, 0.2);
border-right: 1.1em solid rgba(38, 104, 237, 0.2);
border-bottom: 1.1em solid rgba(38, 104, 237, 0.2);
border-left: 1.1em solid #2668ed;
-webkit-transform: translateZ(0);
-ms-transform: translateZ(0);
transform: translateZ(0);
-webkit-animation: load8 1.1s infinite linear;
animation: load8 1.1s infinite linear; }
@-webkit-keyframes load8 {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg); }
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg); } }
@keyframes load8 {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg); }
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg); } }
#browser_steps {
/* convert rows to horizontal cells */ }
#browser_steps th {
display: none; }
#browser_steps li {
list-style: decimal;
padding: 5px; }
#browser_steps li:not(:first-child):hover {
opacity: 1.0; }
#browser_steps li .control {
padding-left: 5px;
padding-right: 5px; }
#browser_steps li .control a {
font-size: 70%; }
#browser_steps li.empty {
padding: 0px;
opacity: 0.35; }
#browser_steps li.empty .control {
display: none; }
#browser_steps li:hover {
background: #eee; }
#browser_steps li > label {
display: none; }
#browser-steps-fieldlist {
height: 100%;
overflow-y: scroll; }
#browser-steps .flex-wrapper {
display: flex;
flex-flow: row;
height: 600px;
/*@todo make this dynamic */ }
/* this is duplicate :( */
#browsersteps-selector-wrapper {
height: 100%;
width: 100%;
overflow-y: scroll;
position: relative;
/* nice tall skinny one */ }
#browsersteps-selector-wrapper > img {
position: absolute;
max-width: 100%; }
#browsersteps-selector-wrapper > canvas {
position: relative;
max-width: 100%; }
#browsersteps-selector-wrapper > canvas:hover {
cursor: pointer; }
#browsersteps-selector-wrapper .loader {
position: absolute;
left: 50%;
top: 50%;
transform: translate(-50%, -50%);
margin-left: -40px;
z-index: 100;
max-width: 350px;
text-align: center; }
#browsersteps-selector-wrapper .spinner, #browsersteps-selector-wrapper .spinner:after {
width: 80px;
height: 80px;
font-size: 3px; }
#browsersteps-selector-wrapper #browsersteps-click-start {
color: var(--color-grey-400); }
#browsersteps-selector-wrapper #browsersteps-click-start:hover {
cursor: pointer; }
.arrow {
border: solid #1b98f8;
border-width: 0 2px 2px 0;
display: inline-block;
padding: 3px; }
.arrow.right {
transform: rotate(-45deg);
-webkit-transform: rotate(-45deg); }
.arrow.left {
transform: rotate(135deg);
-webkit-transform: rotate(135deg); }
.arrow.up, .arrow.asc {
transform: rotate(-135deg);
-webkit-transform: rotate(-135deg); }
.arrow.down, .arrow.desc {
transform: rotate(45deg);
-webkit-transform: rotate(45deg); }
body { body {
color: var(--color-text); color: var(--color-text);
background: var(--color-background-page); } background: var(--color-background-page); }
@@ -275,6 +310,11 @@ body {
white-space: nowrap; white-space: nowrap;
width: 1px; } width: 1px; }
.status-icon {
display: inline-block;
height: 1rem;
vertical-align: middle; }
.pure-table-even { .pure-table-even {
background: var(--color-background); } background: var(--color-background); }
@@ -291,23 +331,44 @@ a.github-link {
a.github-link:hover { a.github-link:hover {
color: var(--color-icon-github-hover); } color: var(--color-icon-github-hover); }
-button.toggle-theme {
-  width: 4rem;
+#toggle-light-mode {
+  width: 3rem; }
+#toggle-light-mode .icon-dark {
+  display: none; }
+#toggle-light-mode.dark .icon-light {
+  display: none; }
+#toggle-light-mode.dark .icon-dark {
+  display: block; }
+#toggle-search {
+  width: 2rem; }
+#search-q {
+  opacity: 0;
+  -webkit-transition: all .9s ease;
+  -moz-transition: all .9s ease;
+  transition: all .9s ease;
+  width: 0;
+  display: none; }
+#search-q.expanded {
+  width: auto;
+  display: inline-block;
+  opacity: 1; }
+#search-result-info {
+  color: #fff; }
+button.toggle-button {
+  vertical-align: middle;
   background: transparent;
   border: none;
   cursor: pointer;
   color: var(--color-icon-github); }
-button.toggle-theme:hover {
+button.toggle-button:hover {
   color: var(--color-icon-github-hover); }
-button.toggle-theme svg {
+button.toggle-button svg {
   fill: currentColor; }
-button.toggle-theme .icon-light {
-  display: block; }
-button.toggle-theme .icon-dark {
-  display: none; }
-button.toggle-theme.dark .icon-light {
-  display: none; }
-button.toggle-theme.dark .icon-dark {
+button.toggle-button .icon-light {
   display: block; }
.pure-menu-horizontal { .pure-menu-horizontal {
@@ -418,6 +479,9 @@ body:before {
.button-small { .button-small {
font-size: 85%; } font-size: 85%; }
.button-xsmall {
font-size: 70%; }
.fetch-error { .fetch-error {
padding-top: 1em; padding-top: 1em;
font-size: 80%; font-size: 80%;
@@ -855,6 +919,17 @@ body.full-width .edit-form {
color: var(--color-text-input-description); } color: var(--color-text-input-description); }
.edit-form .pure-form-message-inline code { .edit-form .pure-form-message-inline code {
font-size: .875em; } font-size: .875em; }
.edit-form .text-filtering {
border: 1px solid #ccc;
padding: 1rem;
border-radius: 5px;
margin-bottom: 1rem; }
.edit-form .text-filtering h3 {
margin-top: 0; }
.edit-form .text-filtering fieldset:last-of-type {
padding-bottom: 0; }
.edit-form .text-filtering fieldset:last-of-type .pure-control-group {
padding-bottom: 0; }
ul { ul {
padding-left: 1em; padding-left: 1em;
@@ -945,3 +1020,43 @@ ul {
display: inline; display: inline;
height: 26px; height: 26px;
vertical-align: middle; } vertical-align: middle; }
/* automatic price following helpers */
.tracking-ldjson-price-data {
background-color: var(--color-background-button-green);
color: #000;
padding: 3px;
border-radius: 3px;
white-space: nowrap; }
.ldjson-price-track-offer {
font-weight: bold;
font-style: italic; }
.ldjson-price-track-offer a.pure-button {
border-radius: 3px;
padding: 3px;
background-color: var(--color-background-button-green); }
.price-follow-tag-icon {
display: inline-block;
height: 0.8rem;
vertical-align: middle; }
#quick-watch-processor-type {
color: #fff; }
#quick-watch-processor-type ul {
padding: 0.3rem; }
#quick-watch-processor-type ul li {
list-style: none;
font-size: 0.8rem; }
.restock-label {
padding: 3px;
border-radius: 3px;
white-space: nowrap; }
.restock-label.in-stock {
background-color: var(--color-background-button-green);
color: #fff; }
.restock-label.not-in-stock {
background-color: var(--color-background-button-cancel);
color: #777; }

@@ -1,20 +1,20 @@
from flask import ( from flask import (
flash flash
) )
+from . model import App, Watch
+from copy import deepcopy, copy
+from os import path, unlink
+from threading import Lock
 import json
 import logging
 import os
-import threading
-import time
-import uuid as uuid_builder
-from copy import deepcopy
-from os import path, unlink
-from threading import Lock
 import re
 import requests
 import secrets
-from . model import App, Watch
+import threading
+import time
+import uuid as uuid_builder
# Is there an existing library to ensure some data store (JSON etc) is in sync with CRUD methods? # Is there an existing library to ensure some data store (JSON etc) is in sync with CRUD methods?
# Open a github issue if you know something :) # Open a github issue if you know something :)
@@ -36,7 +36,6 @@ class ChangeDetectionStore:
self.datastore_path = datastore_path self.datastore_path = datastore_path
self.json_store_path = "{}/url-watches.json".format(self.datastore_path) self.json_store_path = "{}/url-watches.json".format(self.datastore_path)
self.needs_write = False self.needs_write = False
self.proxy_list = None
self.start_time = time.time() self.start_time = time.time()
self.stop_thread = False self.stop_thread = False
# Base definition for all watchers # Base definition for all watchers
@@ -78,10 +77,10 @@ class ChangeDetectionStore:
self.__data['watching'][uuid] = Watch.model(datastore_path=self.datastore_path, default=watch) self.__data['watching'][uuid] = Watch.model(datastore_path=self.datastore_path, default=watch)
print("Watching:", uuid, self.__data['watching'][uuid]['url']) print("Watching:", uuid, self.__data['watching'][uuid]['url'])
-# First time ran, doesnt exist.
-except (FileNotFoundError, json.decoder.JSONDecodeError):
+# First time ran, Create the datastore.
+except (FileNotFoundError):
 if include_default_watches:
-    print("Creating JSON store at", self.datastore_path)
+    print("No JSON DB found at {}, creating JSON store at {}".format(self.json_store_path, self.datastore_path))
self.add_watch(url='https://news.ycombinator.com/', self.add_watch(url='https://news.ycombinator.com/',
tag='Tech news', tag='Tech news',
extras={'fetch_backend': 'html_requests'}) extras={'fetch_backend': 'html_requests'})
@@ -89,9 +88,11 @@ class ChangeDetectionStore:
self.add_watch(url='https://changedetection.io/CHANGELOG.txt', self.add_watch(url='https://changedetection.io/CHANGELOG.txt',
tag='changedetection.io', tag='changedetection.io',
extras={'fetch_backend': 'html_requests'}) extras={'fetch_backend': 'html_requests'})
self.__data['version_tag'] = version_tag self.__data['version_tag'] = version_tag
# Just to test that proxies.json if it exists, doesnt throw a parsing error on startup
test_list = self.proxy_list
# Helper to remove password protection # Helper to remove password protection
password_reset_lockfile = "{}/removepassword.lock".format(self.datastore_path) password_reset_lockfile = "{}/removepassword.lock".format(self.datastore_path)
if path.isfile(password_reset_lockfile): if path.isfile(password_reset_lockfile):
@@ -116,11 +117,6 @@ class ChangeDetectionStore:
secret = secrets.token_hex(16) secret = secrets.token_hex(16)
self.__data['settings']['application']['api_access_token'] = secret self.__data['settings']['application']['api_access_token'] = secret
# Proxy list support - available as a selection in settings when text file is imported
proxy_list_file = "{}/proxies.json".format(self.datastore_path)
if path.isfile(proxy_list_file):
self.import_proxy_list(proxy_list_file)
# Bump the update version by running updates # Bump the update version by running updates
self.run_updates() self.run_updates()
@@ -175,14 +171,6 @@ class ChangeDetectionStore:
@property @property
def data(self): def data(self):
has_unviewed = False
for uuid, watch in self.__data['watching'].items():
# #106 - Be sure this is None on empty string, False, None, etc
# Default var for fetch_backend
# @todo this may not be needed anymore, or could be easily removed
if not self.__data['watching'][uuid]['fetch_backend']:
self.__data['watching'][uuid]['fetch_backend'] = self.__data['settings']['application']['fetch_backend']
# Re #152, Return env base_url if not overriden, @todo also prefer the proxy pass url # Re #152, Return env base_url if not overriden, @todo also prefer the proxy pass url
env_base_url = os.getenv('BASE_URL','') env_base_url = os.getenv('BASE_URL','')
if not self.__data['settings']['application']['base_url']: if not self.__data['settings']['application']['base_url']:
@@ -204,30 +192,28 @@ class ChangeDetectionStore:
tags.sort() tags.sort()
return tags return tags
def unlink_history_file(self, path):
try:
unlink(path)
except (FileNotFoundError, IOError):
pass
# Delete a single watch by UUID # Delete a single watch by UUID
def delete(self, uuid): def delete(self, uuid):
import pathlib
import shutil
with self.lock: with self.lock:
if uuid == 'all': if uuid == 'all':
self.__data['watching'] = {} self.__data['watching'] = {}
# GitHub #30 also delete history records # GitHub #30 also delete history records
for uuid in self.data['watching']: for uuid in self.data['watching']:
-for path in self.data['watching'][uuid].history.values():
-    self.unlink_history_file(path)
+path = pathlib.Path(os.path.join(self.datastore_path, uuid))
+if os.path.exists(path):
+    shutil.rmtree(path)
 else:
-for path in self.data['watching'][uuid].history.values():
-    self.unlink_history_file(path)
+path = pathlib.Path(os.path.join(self.datastore_path, uuid))
+if os.path.exists(path):
+    shutil.rmtree(path)
del self.data['watching'][uuid] del self.data['watching'][uuid]
self.needs_write_urgent = True self.needs_write_urgent = True
# Clone a watch by UUID # Clone a watch by UUID
def clone(self, uuid): def clone(self, uuid):
@@ -250,12 +236,15 @@ class ChangeDetectionStore:
def clear_watch_history(self, uuid): def clear_watch_history(self, uuid):
import pathlib import pathlib
-self.__data['watching'][uuid].update(
-    {'last_checked': 0,
-     'last_viewed': 0,
-     'previous_md5': False,
-     'last_notification_error': False,
-     'last_error': False})
+self.__data['watching'][uuid].update({
+    'last_checked': 0,
+    'has_ldjson_price_data': None,
+    'last_error': False,
+    'last_notification_error': False,
+    'last_viewed': 0,
+    'previous_md5': False,
+    'track_ldjson_price_data': None,
+    })
# JSON Data, Screenshots, Textfiles (history index and snapshots), HTML in the future etc # JSON Data, Screenshots, Textfiles (history index and snapshots), HTML in the future etc
for item in pathlib.Path(os.path.join(self.datastore_path, uuid)).rglob("*.*"): for item in pathlib.Path(os.path.join(self.datastore_path, uuid)).rglob("*.*"):
@@ -299,6 +288,7 @@ class ChangeDetectionStore:
'method', 'method',
'paused', 'paused',
'previous_md5', 'previous_md5',
'processor',
'subtractive_selectors', 'subtractive_selectors',
'tag', 'tag',
'text_should_not_be_present', 'text_should_not_be_present',
@@ -318,13 +308,17 @@ class ChangeDetectionStore:
logging.error("Error fetching metadata for shared watch link", url, str(e)) logging.error("Error fetching metadata for shared watch link", url, str(e))
flash("Error fetching metadata for {}".format(url), 'error') flash("Error fetching metadata for {}".format(url), 'error')
return False return False
from .model.Watch import is_safe_url
if not is_safe_url(url):
flash('Watch protocol is not permitted by SAFE_PROTOCOL_REGEX', 'error')
return None
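The is_safe_url() check referenced here lives in model/Watch.py and is not part of this diff. As a rough illustration only (the function name suffix and the default pattern below are assumptions, not the project's actual values), the guard amounts to matching the URL's protocol against the SAFE_PROTOCOL_REGEX environment variable:
import os
import re

def is_safe_url_sketch(url):
    # Hypothetical stand-in for model/Watch.is_safe_url; the real default pattern may differ.
    safe_protocol_regex = os.getenv('SAFE_PROTOCOL_REGEX', r'^(http|https)://')
    return bool(re.search(safe_protocol_regex, url.strip(), re.IGNORECASE))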
with self.lock: with self.lock:
# #Re 569 # #Re 569
new_watch = Watch.model(datastore_path=self.datastore_path, default={ new_watch = Watch.model(datastore_path=self.datastore_path, default={
'url': url, 'url': url,
'tag': tag 'tag': tag,
'date_created': int(time.time())
}) })
new_uuid = new_watch['uuid'] new_uuid = new_watch['uuid']
@@ -369,28 +363,25 @@ class ChangeDetectionStore:
f.write(screenshot) f.write(screenshot)
f.close() f.close()
# Make a JPEG that's used in notifications (due to being a smaller size) available
from PIL import Image
im1 = Image.open(target_path)
im1.convert('RGB').save(target_path.replace('.png','.jpg'), quality=int(os.getenv("NOTIFICATION_SCREENSHOT_JPG_QUALITY", 75)))
def save_error_text(self, watch_uuid, contents): def save_error_text(self, watch_uuid, contents):
if not self.data['watching'].get(watch_uuid): if not self.data['watching'].get(watch_uuid):
return return
-target_path = os.path.join(self.datastore_path, watch_uuid, "last-error.txt")
+self.data['watching'][watch_uuid].ensure_data_dir_exists()
+target_path = os.path.join(self.datastore_path, watch_uuid, "last-error.txt")
with open(target_path, 'w') as f: with open(target_path, 'w') as f:
f.write(contents) f.write(contents)
def save_xpath_data(self, watch_uuid, data, as_error=False): def save_xpath_data(self, watch_uuid, data, as_error=False):
if not self.data['watching'].get(watch_uuid): if not self.data['watching'].get(watch_uuid):
return return
if as_error: if as_error:
target_path = os.path.join(self.datastore_path, watch_uuid, "elements-error.json") target_path = os.path.join(self.datastore_path, watch_uuid, "elements-error.json")
else: else:
target_path = os.path.join(self.datastore_path, watch_uuid, "elements.json") target_path = os.path.join(self.datastore_path, watch_uuid, "elements.json")
self.data['watching'][watch_uuid].ensure_data_dir_exists()
with open(target_path, 'w') as f: with open(target_path, 'w') as f:
f.write(json.dumps(data)) f.write(json.dumps(data))
f.close() f.close()
@@ -460,10 +451,28 @@ class ChangeDetectionStore:
print ("Removing",item) print ("Removing",item)
unlink(item) unlink(item)
-def import_proxy_list(self, filename):
-    with open(filename) as f:
-        self.proxy_list = json.load(f)
-        print ("Registered proxy list", list(self.proxy_list.keys()))
+@property
+def proxy_list(self):
+    proxy_list = {}
+    proxy_list_file = os.path.join(self.datastore_path, 'proxies.json')
+    # Load from external config file
+    if path.isfile(proxy_list_file):
+        with open("{}/proxies.json".format(self.datastore_path)) as f:
+            proxy_list = json.load(f)
+    # Mapping from UI config if available
+    extras = self.data['settings']['requests'].get('extra_proxies')
+    if extras:
+        i=0
+        for proxy in extras:
+            i += 0
+            if proxy.get('proxy_name') and proxy.get('proxy_url'):
+                k = "ui-" + str(i) + proxy.get('proxy_name')
+                proxy_list[k] = {'label': proxy.get('proxy_name'), 'url': proxy.get('proxy_url')}
+    return proxy_list if len(proxy_list) else None
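For reference, a minimal sketch of the proxies.json file this property reads, assuming each entry maps a proxy key to a label and url in the same shape as the UI-mapped entries above (the key names and URLs below are illustrative only):
import json
import os

datastore_path = "/datastore"  # assumed data directory location
example_proxies = {
    "proxy-one": {"label": "Proxy one", "url": "http://user:pass@proxy-host:3128"},
    "proxy-two": {"label": "Proxy two", "url": "socks5://127.0.0.1:1080"},
}
# Writing the file into the data directory means it is picked up on the next read of proxy_list.
with open(os.path.join(datastore_path, "proxies.json"), "w") as f:
    json.dump(example_proxies, f, indent=2)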
def get_preferred_proxy_for_watch(self, uuid): def get_preferred_proxy_for_watch(self, uuid):
@@ -473,11 +482,10 @@ class ChangeDetectionStore:
:return: proxy "key" id :return: proxy "key" id
""" """
proxy_id = None
if self.proxy_list is None: if self.proxy_list is None:
return None return None
# If its a valid one # If it's a valid one
watch = self.data['watching'].get(uuid) watch = self.data['watching'].get(uuid)
if watch.get('proxy') and watch.get('proxy') in list(self.proxy_list.keys()): if watch.get('proxy') and watch.get('proxy') in list(self.proxy_list.keys()):
@@ -490,13 +498,33 @@ class ChangeDetectionStore:
if self.proxy_list.get(system_proxy_id): if self.proxy_list.get(system_proxy_id):
return system_proxy_id return system_proxy_id
-# Fallback - Did not resolve anything, use the first available
-if system_proxy_id is None:
+# Fallback - Did not resolve anything, or doesnt exist, use the first available
+if system_proxy_id is None or not self.proxy_list.get(system_proxy_id):
first_default = list(self.proxy_list)[0] first_default = list(self.proxy_list)[0]
return first_default return first_default
return None return None
@property
def has_extra_headers_file(self):
filepath = os.path.join(self.datastore_path, 'headers.txt')
return os.path.isfile(filepath)
def get_all_headers(self):
from .model.App import parse_headers_from_text_file
headers = copy(self.data['settings'].get('headers', {}))
filepath = os.path.join(self.datastore_path, 'headers.txt')
try:
if os.path.isfile(filepath):
headers.update(parse_headers_from_text_file(filepath))
except Exception as e:
print(f"ERROR reading headers.txt at {filepath}", str(e))
return headers
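parse_headers_from_text_file() is defined in model/App.py and is not shown in this diff; the file it reads is presumably one "Header-Name: value" pair per line, the same shape as the request-headers placeholder in the watch edit form. A rough, hypothetical equivalent of that parsing step:
def parse_headers_from_text_file_sketch(filepath):
    # Hypothetical stand-in: one "Name: value" header per line, lines without a colon are skipped.
    headers = {}
    with open(filepath, 'r') as f:
        for line in f:
            if ':' in line:
                name, _, value = line.partition(':')
                headers[name.strip()] = value.strip()
    return headers

# Example headers.txt contents:
#   Cookie: foobar
#   User-Agent: wonderbra 1.0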
# Run all updates # Run all updates
# IMPORTANT - Each update could be run even when they have a new install and the schema is correct # IMPORTANT - Each update could be run even when they have a new install and the schema is correct
# So therefor - each `update_n` should be very careful about checking if it needs to actually run # So therefor - each `update_n` should be very careful about checking if it needs to actually run
@@ -662,3 +690,23 @@ class ChangeDetectionStore:
self.data['settings']['application']['notification_urls'][i] = re.sub(r, r'{{\1}}', url) self.data['settings']['application']['notification_urls'][i] = re.sub(r, r'{{\1}}', url)
return return
# Some setups may have missed the correct default, so it shows the wrong config in the UI, although it will default to system-wide
def update_10(self):
for uuid, watch in self.data['watching'].items():
try:
if not watch.get('fetch_backend', ''):
watch['fetch_backend'] = 'system'
except:
continue
return
# We don't know when the date_created was in the past until now, so just add an index number for now.
def update_11(self):
i = 0
for uuid, watch in self.data['watching'].items():
if not watch.get('date_created'):
watch['date_created'] = i
i+=1
return
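run_updates() itself is outside this hunk; conceptually it discovers the numbered update_n methods and runs, in order, the ones newer than the schema version already stored. A simplified sketch of that idea (not the project's exact implementation):
import re

def run_updates_sketch(store, current_schema_version):
    # Collect update_N methods, sort numerically, run those above the stored schema version.
    updates = {}
    for name in dir(store):
        m = re.match(r'^update_(\d+)$', name)
        if m:
            updates[int(m.group(1))] = getattr(store, name)
    for n in sorted(updates):
        if n > current_schema_version:
            updates[n]()
            current_schema_version = n
    return current_schema_version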

@@ -17,14 +17,15 @@
<li><code>tgram://</code> bots cant send messages to other bots, so you should specify chat ID of non-bot user.</li> <li><code>tgram://</code> bots cant send messages to other bots, so you should specify chat ID of non-bot user.</li>
<li><code>tgram://</code> only supports very limited HTML and can fail when extra tags are sent, <a href="https://core.telegram.org/bots/api#html-style">read more here</a> (or use plaintext/markdown format)</li> <li><code>tgram://</code> only supports very limited HTML and can fail when extra tags are sent, <a href="https://core.telegram.org/bots/api#html-style">read more here</a> (or use plaintext/markdown format)</li>
<li><code>gets://</code>, <code>posts://</code>, <code>puts://</code>, <code>deletes://</code> for direct API calls (or omit the "<code>s</code>" for non-SSL ie <code>get://</code>)</li> <li><code>gets://</code>, <code>posts://</code>, <code>puts://</code>, <code>deletes://</code> for direct API calls (or omit the "<code>s</code>" for non-SSL ie <code>get://</code>)</li>
<li>Accepts the <code>{{ '{{token}}' }}</code> placeholders listed below</li>
</ul> </ul>
</div> </div>
<div class="notifications-wrapper"> <div class="notifications-wrapper">
<a id="send-test-notification" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Send test notification</a> <a id="send-test-notification" class="pure-button button-secondary button-xsmall" >Send test notification</a>
{% if emailprefix %} {% if emailprefix %}
<a id="add-email-helper" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Add email</a> <a id="add-email-helper" class="pure-button button-secondary button-xsmall" >Add email <img style="height: 1em; display: inline-block" src="{{url_for('static_content', group='images', filename='email.svg')}}" alt="Add an email address"> </a>
{% endif %} {% endif %}
<a href="{{url_for('notification_logs')}}" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Notification debug logs</a> <a href="{{url_for('notification_logs')}}" class="pure-button button-secondary button-xsmall" >Notification debug logs</a>
</div> </div>
</div> </div>
<div id="notification-customisation" class="pure-control-group"> <div id="notification-customisation" class="pure-control-group">
@@ -55,48 +56,66 @@
</thead> </thead>
<tbody> <tbody>
<tr> <tr>
<td><code>{{ '{{ base_url }}' }}</code></td> <td><code>{{ '{{base_url}}' }}</code></td>
<td>The URL of the changedetection.io instance you are running.</td> <td>The URL of the changedetection.io instance you are running.</td>
</tr> </tr>
<tr> <tr>
<td><code>{{ '{{ watch_url }}' }}</code></td> <td><code>{{ '{{watch_url}}' }}</code></td>
<td>The URL being watched.</td> <td>The URL being watched.</td>
</tr> </tr>
<tr> <tr>
<td><code>{{ '{{ watch_uuid }}' }}</code></td> <td><code>{{ '{{watch_uuid}}' }}</code></td>
<td>The UUID of the watch.</td> <td>The UUID of the watch.</td>
</tr> </tr>
<tr> <tr>
<td><code>{{ '{{ watch_title }}' }}</code></td> <td><code>{{ '{{watch_title}}' }}</code></td>
<td>The title of the watch.</td> <td>The title of the watch.</td>
</tr> </tr>
<tr> <tr>
<td><code>{{ '{{ watch_tag }}' }}</code></td> <td><code>{{ '{{watch_tag}}' }}</code></td>
<td>The watch label / tag</td> <td>The watch label / tag</td>
</tr> </tr>
<tr> <tr>
<td><code>{{ '{{ preview_url }}' }}</code></td> <td><code>{{ '{{preview_url}}' }}</code></td>
<td>The URL of the preview page generated by changedetection.io.</td> <td>The URL of the preview page generated by changedetection.io.</td>
</tr> </tr>
<tr> <tr>
<td><code>{{ '{{ diff_url }}' }}</code></td> <td><code>{{ '{{diff_url}}' }}</code></td>
<td>The diff output - differences only</td> <td>The URL of the diff output for the watch.</td>
</tr>
<tr>
<td><code>{{ '{{diff}}' }}</code></td>
<td>The diff output - only changes, additions, and removals</td>
</tr>
<tr>
<td><code>{{ '{{diff_added}}' }}</code></td>
<td>The diff output - only changes and additions</td>
</tr>
<tr>
<td><code>{{ '{{diff_removed}}' }}</code></td>
<td>The diff output - only changes and removals</td>
</tr> </tr>
<tr> <tr>
<td><code>{{ '{{ diff_full }}' }}</code></td> <td><code>{{ '{{diff_full}}' }}</code></td>
<td>The diff output - full difference output</td> <td>The diff output - full difference output</td>
</tr> </tr>
<tr> <tr>
<td><code>{{ '{{ current_snapshot }}' }}</code></td> <td><code>{{ '{{current_snapshot}}' }}</code></td>
<td>The current snapshot value, useful when combined with JSON or CSS filters <td>The current snapshot value, useful when combined with JSON or CSS filters
</td> </td>
</tr> </tr>
<tr>
<td><code>{{ '{{triggered_text}}' }}</code></td>
<td>Text that tripped the trigger from filters</td>
</tr>
</tbody> </tbody>
</table> </table>
<div class="pure-form-message-inline"> <div class="pure-form-message-inline">
<br> <br>
URLs generated by changedetection.io (such as <code>{{ '{{ diff_url }}' }}</code>) require the <code>BASE_URL</code> environment variable set.<br/> URLs generated by changedetection.io (such as <code>{{ '{{diff_url}}' }}</code>) require the <code>BASE_URL</code> environment variable set.<br>
Your <code>BASE_URL</code> var is currently "{{settings_application['current_base_url']}}" Your <code>BASE_URL</code> var is currently "{{settings_application['current_base_url']}}"
<br>
Warning: Contents of <code>{{ '{{diff}}' }}</code>, <code>{{ '{{diff_removed}}' }}</code>, and <code>{{ '{{diff_added}}' }}</code> depend on how the difference algorithm perceives the change. For example, an addition or removal could be perceived as a change in some cases. <a target="_new" href="https://github.com/dgtlmoon/changedetection.io/wiki/Using-the-%7B%7Bdiff%7D%7D,-%7B%7Bdiff_added%7D%7D,-and-%7B%7Bdiff_removed%7D%7D-notification-tokens">More Here</a> <br>
</div> </div>
</div> </div>
</div> </div>
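As a usage illustration only (not a template shipped with the project), a notification body combining the tokens documented above might look like:
# Hypothetical notification body; each {{token}} is replaced by changedetection.io when the notification is sent.
notification_body = (
    "Change detected in {{watch_title}}\n"
    "Watched URL: {{watch_url}}\n"
    "View the difference: {{diff_url}}\n\n"
    "{{diff}}\n"
)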

@@ -1,7 +0,0 @@
{% macro pagination(sorted_watches, total_per_page, current_page) %}
{{ sorted_watches|length }}
{% for row in sorted_watches|batch(total_per_page, '&nbsp;') %}
{{ loop.index}}
{% endfor %}
{% endmacro %}

@@ -2,35 +2,35 @@
<html lang="en" data-darkmode="{{ get_darkmode_state() }}"> <html lang="en" data-darkmode="{{ get_darkmode_state() }}">
<head> <head>
<meta charset="utf-8"/> <meta charset="utf-8" >
<meta name="viewport" content="width=device-width, initial-scale=1.0"/> <meta name="viewport" content="width=device-width, initial-scale=1.0" >
<meta name="description" content="Self hosted website change detection."/> <meta name="description" content="Self hosted website change detection." >
<title>Change Detection{{extra_title}}</title> <title>Change Detection{{extra_title}}</title>
<link rel="alternate" type="application/rss+xml" title="Changedetection.io » Feed{% if active_tag %}- {{active_tag}}{% endif %}" href="{{ url_for('rss', tag=active_tag , token=app_rss_token)}}"/> <link rel="alternate" type="application/rss+xml" title="Changedetection.io » Feed{% if active_tag %}- {{active_tag}}{% endif %}" href="{{ url_for('rss', tag=active_tag , token=app_rss_token)}}" >
<link rel="stylesheet" href="{{url_for('static_content', group='styles', filename='pure-min.css')}}"/> <link rel="stylesheet" href="{{url_for('static_content', group='styles', filename='pure-min.css')}}" >
<link rel="stylesheet" href="{{url_for('static_content', group='styles', filename='styles.css')}}"/> <link rel="stylesheet" href="{{url_for('static_content', group='styles', filename='styles.css')}}" >
{% if extra_stylesheets %} {% if extra_stylesheets %}
{% for m in extra_stylesheets %} {% for m in extra_stylesheets %}
<link rel="stylesheet" href="{{ m }}?ver=1000"/> <link rel="stylesheet" href="{{ m }}?ver=1000" >
{% endfor %} {% endfor %}
{% endif %} {% endif %}
<link rel="apple-touch-icon" sizes="180x180" href="{{url_for('static_content', group='favicons', filename='apple-touch-icon.png')}}"/> <link rel="apple-touch-icon" sizes="180x180" href="{{url_for('static_content', group='favicons', filename='apple-touch-icon.png')}}">
<link rel="icon" type="image/png" sizes="32x32" href="{{url_for('static_content', group='favicons', filename='favicon-32x32.png')}}"/> <link rel="icon" type="image/png" sizes="32x32" href="{{url_for('static_content', group='favicons', filename='favicon-32x32.png')}}">
<link rel="icon" type="image/png" sizes="16x16" href="{{url_for('static_content', group='favicons', filename='favicon-16x16.png')}}"/> <link rel="icon" type="image/png" sizes="16x16" href="{{url_for('static_content', group='favicons', filename='favicon-16x16.png')}}">
<link rel="manifest" href="{{url_for('static_content', group='favicons', filename='site.webmanifest')}}"/> <link rel="manifest" href="{{url_for('static_content', group='favicons', filename='site.webmanifest')}}">
<link rel="mask-icon" href="{{url_for('static_content', group='favicons', filename='safari-pinned-tab.svg')}}" color="#5bbad5"/> <link rel="mask-icon" href="{{url_for('static_content', group='favicons', filename='safari-pinned-tab.svg')}}" color="#5bbad5">
<link rel="shortcut icon" href="{{url_for('static_content', group='favicons', filename='favicon.ico')}}"/> <link rel="shortcut icon" href="{{url_for('static_content', group='favicons', filename='favicon.ico')}}">
<meta name="msapplication-TileColor" content="#da532c"/> <meta name="msapplication-TileColor" content="#da532c">
<meta name="msapplication-config" content="favicons/browserconfig.xml"/> <meta name="msapplication-config" content="favicons/browserconfig.xml">
<meta name="theme-color" content="#ffffff"/> <meta name="theme-color" content="#ffffff">
<style> <style>
body::before { body::before {
background-image: url({{url_for('static_content', group='images', filename='gradient-border.png') }}); background-image: url({{url_for('static_content', group='images', filename='gradient-border.png') }});
} }
</style> </style>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script> <script src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script>
</head> </head>
<body> <body>
@@ -82,11 +82,21 @@
<a href="{{url_for('logout')}}" class="pure-menu-link">LOG OUT</a> <a href="{{url_for('logout')}}" class="pure-menu-link">LOG OUT</a>
</li> </li>
{% endif %} {% endif %}
<li class="pure-menu-item pure-form" id="search-menu-item">
<!-- We use GET here so it offers people a chance to set bookmarks etc -->
<form name="searchForm" action="" method="GET">
<input id="search-q" class="" name="q" placeholder="URL or Title {% if active_tag %}in '{{ active_tag }}'{% endif %}" required="" type="text" value="">
<input name="tag" type="hidden" value="{% if active_tag %}{{active_tag}}{% endif %}">
<button class="toggle-button " id="toggle-search" type="button" title="Search, or Use Alt+S Key" >
{% include "svgs/search-icon.svg" %}
</button>
</form>
</li>
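Because the form above submits with GET, a search is simply a bookmarkable URL carrying q (and optionally tag) parameters; for example (the instance address is assumed):
from urllib.parse import urlencode

base_url = "http://localhost:5000/"  # assumed local changedetection.io instance
print(base_url + "?" + urlencode({"q": "github", "tag": "Tech news"}))
# -> http://localhost:5000/?q=github&tag=Tech+news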
<li class="pure-menu-item"> <li class="pure-menu-item">
{% if dark_mode %} {% if dark_mode %}
{% set darkClass = 'dark' %} {% set darkClass = 'dark' %}
{% endif %} {% endif %}
<button class="toggle-theme {{darkClass}}" type="button" title="Toggle Light/Dark Mode"> <button class="toggle-button {{darkClass}}" id ="toggle-light-mode" type="button" title="Toggle Light/Dark Mode">
<span class="visually-hidden">Toggle light/dark mode</span> <span class="visually-hidden">Toggle light/dark mode</span>
<span class="icon-light"> <span class="icon-light">
{% include "svgs/light-mode-toggle-icon.svg" %} {% include "svgs/light-mode-toggle-icon.svg" %}
@@ -106,7 +116,7 @@
</div> </div>
{% if hosted_sticky %} {% if hosted_sticky %}
<div class="sticky-tab" id="hosted-sticky"> <div class="sticky-tab" id="hosted-sticky">
<a href="https://lemonade.changedetection.io/start?ref={{guid}}">Let us host your instance!</a> <a href="https://changedetection.io/?ref={{guid}}">Let us host your instance!</a>
</div> </div>
{% endif %} {% endif %}
{% if left_sticky %} {% if left_sticky %}
@@ -137,16 +147,13 @@
<li class="message"> <li class="message">
Share this link: Share this link:
<span id="share-link">{{ session['share-link'] }}</span> <span id="share-link">{{ session['share-link'] }}</span>
<img style="height: 1em; display: inline-block" src="{{url_for('static_content', group='images', filename='copy.svg')}}"/> <img style="height: 1em; display: inline-block" src="{{url_for('static_content', group='images', filename='copy.svg')}}" >
</li> </li>
</ul> </ul>
{% endif %} {% endif %}
{% block content %}{% endblock %} {% block content %}{% endblock %}
</section> </section>
-<script
-    type="text/javascript"
-    src="{{url_for('static_content', group='js', filename='toggle-theme.js')}}"
-    defer></script>
+<script src="{{url_for('static_content', group='js', filename='toggle-theme.js')}}" defer></script>
</body> </body>
</html> </html>

@@ -6,7 +6,7 @@
action="{{url_for('clear_all_history')}}" action="{{url_for('clear_all_history')}}"
method="POST" method="POST"
> >
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" /> <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" >
<fieldset> <fieldset>
<div class="pure-control-group"> <div class="pure-control-group">
This will remove version history (snapshots) for ALL watches, but keep This will remove version history (snapshots) for ALL watches, but keep

@@ -7,7 +7,7 @@
const error_screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid, error_screenshot=1) }}"; const error_screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid, error_screenshot=1) }}";
{% endif %} {% endif %}
</script> </script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='diff-overview.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='diff-overview.js')}}" defer></script>
<div id="settings"> <div id="settings">
<h1>Differences</h1> <h1>Differences</h1>
@@ -15,15 +15,15 @@
<fieldset> <fieldset>
<label for="diffWords" class="pure-checkbox"> <label for="diffWords" class="pure-checkbox">
<input type="radio" name="diff_type" id="diffWords" value="diffWords"/> Words</label> <input type="radio" name="diff_type" id="diffWords" value="diffWords"> Words</label>
<label for="diffLines" class="pure-checkbox"> <label for="diffLines" class="pure-checkbox">
<input type="radio" name="diff_type" id="diffLines" value="diffLines" checked=""/> Lines</label> <input type="radio" name="diff_type" id="diffLines" value="diffLines" checked=""> Lines</label>
<label for="diffChars" class="pure-checkbox"> <label for="diffChars" class="pure-checkbox">
<input type="radio" name="diff_type" id="diffChars" value="diffChars"/> Chars</label> <input type="radio" name="diff_type" id="diffChars" value="diffChars"> Chars</label>
<!-- @todo - when mimetype is JSON, select this by default? --> <!-- @todo - when mimetype is JSON, select this by default? -->
<label for="diffJson" class="pure-checkbox"> <label for="diffJson" class="pure-checkbox">
<input type="radio" name="diff_type" id="diffJson" value="diffJson" /> JSON</label> <input type="radio" name="diff_type" id="diffJson" value="diffJson" > JSON</label>
{% if versions|length >= 1 %} {% if versions|length >= 1 %}
<label for="diff-version">Compare newest (<span id="current-v-date"></span>) with</label> <label for="diff-version">Compare newest (<span id="current-v-date"></span>) with</label>
@@ -43,7 +43,7 @@
<span> <span>
<!-- https://github.com/kpdecker/jsdiff/issues/389 ? --> <!-- https://github.com/kpdecker/jsdiff/issues/389 ? -->
<label for="ignoreWhitespace" class="pure-checkbox" id="label-diff-ignorewhitespace"> <label for="ignoreWhitespace" class="pure-checkbox" id="label-diff-ignorewhitespace">
<input type="checkbox" id="ignoreWhitespace" name="ignoreWhitespace"/> Ignore Whitespace</label> <input type="checkbox" id="ignoreWhitespace" name="ignoreWhitespace" > Ignore Whitespace</label>
</span> </span>
</div> </div>
@@ -51,7 +51,7 @@
<a onclick="next_diff();">Jump</a> <a onclick="next_diff();">Jump</a>
</div> </div>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<div class="tabs"> <div class="tabs">
<ul> <ul>
{% if last_error_text %}<li class="tab" id="error-text-tab"><a href="#error-text">Error Text</a></li> {% endif %} {% if last_error_text %}<li class="tab" id="error-text-tab"><a href="#error-text">Error Text</a></li> {% endif %}
@@ -72,12 +72,16 @@
<div class="tab-pane-inner" id="error-screenshot"> <div class="tab-pane-inner" id="error-screenshot">
<div class="snapshot-age error">{{watch_a.snapshot_error_screenshot_ctime|format_seconds_ago}} seconds ago</div> <div class="snapshot-age error">{{watch_a.snapshot_error_screenshot_ctime|format_seconds_ago}} seconds ago</div>
<img id="error-screenshot-img" style="max-width: 80%" alt="Current error-ing screenshot from most recent request"/> <img id="error-screenshot-img" style="max-width: 80%" alt="Current error-ing screenshot from most recent request" >
</div> </div>
<div class="tab-pane-inner" id="text"> <div class="tab-pane-inner" id="text">
<div class="tip">Pro-tip: Use <strong>show current snapshot</strong> tab to visualise what will be ignored. <div class="tip">Pro-tip: Use <strong>show current snapshot</strong> tab to visualise what will be ignored.</div>
</div>
{% if password_enabled_and_share_is_off %}
<div class="tip">Pro-tip: You can enable <strong>"share access when password is enabled"</strong> from settings</div>
{% endif %}
<div class="snapshot-age">{{watch_a.snapshot_text_ctime|format_timestamp_timeago}}</div> <div class="snapshot-age">{{watch_a.snapshot_text_ctime|format_timestamp_timeago}}</div>
<table> <table>
@@ -101,7 +105,7 @@
{% if is_html_webdriver %} {% if is_html_webdriver %}
{% if screenshot %} {% if screenshot %}
<div class="snapshot-age">{{watch_a.snapshot_screenshot_ctime|format_timestamp_timeago}}</div> <div class="snapshot-age">{{watch_a.snapshot_screenshot_ctime|format_timestamp_timeago}}</div>
<img style="max-width: 80%" id="screenshot-img" alt="Current screenshot from most recent request"/> <img style="max-width: 80%" id="screenshot-img" alt="Current screenshot from most recent request" >
{% else %} {% else %}
No screenshot available just yet! Try rechecking the page. No screenshot available just yet! Try rechecking the page.
{% endif %} {% endif %}
@@ -113,19 +117,19 @@
<form id="extract-data-form" class="pure-form pure-form-stacked edit-form" <form id="extract-data-form" class="pure-form pure-form-stacked edit-form"
action="{{ url_for('diff_history_page', uuid=uuid) }}#extract" action="{{ url_for('diff_history_page', uuid=uuid) }}#extract"
method="POST"> method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/> <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<p>This tool will extract text data from all of the watch history.</p> <p>This tool will extract text data from all of the watch history.</p>
<div class="pure-control-group"> <div class="pure-control-group">
{{ render_field(extract_form.extract_regex) }} {{ render_field(extract_form.extract_regex) }}
<span class="pure-form-message-inline"> <span class="pure-form-message-inline">
A <strong>RegEx</strong> is a pattern that identifies exactly which part inside of the text that you want to extract.<br/> A <strong>RegEx</strong> is a pattern that identifies exactly which part inside of the text that you want to extract.<br>
<p> <p>
For example, to extract only the numbers from text &dash;</br> For example, to extract only the numbers from text &dash;<br>
<strong>Raw text</strong>: <code>Temperature <span style="color: red">5.5</span>°C in Sydney</code></br> <strong>Raw text</strong>: <code>Temperature <span style="color: red">5.5</span>°C in Sydney</code><br>
<strong>RegEx to extract:</strong> <code>Temperature <span style="color: red">([0-9\.]+)</span></code><br/> <strong>RegEx to extract:</strong> <code>Temperature <span style="color: red">([0-9\.]+)</span></code><br>
</p> </p>
<p> <p>
<a href="https://RegExr.com/">Be sure to test your RegEx here.</a> <a href="https://RegExr.com/">Be sure to test your RegEx here.</a>
@@ -145,9 +149,9 @@
<script> <script>
const newest_version_timestamp = {{newest_version_timestamp}}; const newest_version_timestamp = {{newest_version_timestamp}};
</script> </script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='diff.min.js')}}"></script> <script src="{{url_for('static_content', group='js', filename='diff.min.js')}}"></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='diff-render.js')}}"></script> <script src="{{url_for('static_content', group='js', filename='diff-render.js')}}"></script>
{% endblock %} {% endblock %}

@@ -2,7 +2,7 @@
{% block content %} {% block content %}
{% from '_helpers.jinja' import render_field, render_checkbox_field, render_button %} {% from '_helpers.jinja' import render_field, render_checkbox_field, render_button %}
{% from '_common_fields.jinja' import render_common_settings_form %} {% from '_common_fields.jinja' import render_common_settings_form %}
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<script> <script>
const notification_base_url="{{url_for('ajax_callback_send_notification_test')}}"; const notification_base_url="{{url_for('ajax_callback_send_notification_test')}}";
const watch_visual_selector_data_url="{{url_for('static_content', group='visual_selector_data', filename=uuid)}}"; const watch_visual_selector_data_url="{{url_for('static_content', group='visual_selector_data', filename=uuid)}}";
@@ -14,15 +14,17 @@
{% endif %} {% endif %}
const browser_steps_config=JSON.parse('{{ browser_steps_config|tojson }}'); const browser_steps_config=JSON.parse('{{ browser_steps_config|tojson }}');
const browser_steps_start_url="{{url_for('browser_steps.browsersteps_start_session', uuid=uuid)}}";
const browser_steps_sync_url="{{url_for('browser_steps.browsersteps_ui_update', uuid=uuid)}}"; const browser_steps_sync_url="{{url_for('browser_steps.browsersteps_ui_update', uuid=uuid)}}";
</script> </script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-settings.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='watch-settings.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='limit.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='limit.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='visual-selector.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='visual-selector.js')}}" defer></script>
{% if playwright_enabled %} {% if playwright_enabled %}
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='browser-steps.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='browser-steps.js')}}" defer></script>
{% endif %} {% endif %}
<div class="edit-form monospaced-textarea"> <div class="edit-form monospaced-textarea">
@@ -34,8 +36,15 @@
{% if playwright_enabled %} {% if playwright_enabled %}
<li class="tab"><a id="browsersteps-tab" href="#browser-steps">Browser Steps</a></li> <li class="tab"><a id="browsersteps-tab" href="#browser-steps">Browser Steps</a></li>
{% endif %} {% endif %}
{% if watch['processor'] == 'text_json_diff' %}
<li class="tab"><a id="visualselector-tab" href="#visualselector">Visual Filter Selector</a></li> <li class="tab"><a id="visualselector-tab" href="#visualselector">Visual Filter Selector</a></li>
<li class="tab"><a href="#filters-and-triggers">Filters &amp; Triggers</a></li> <li class="tab"><a href="#filters-and-triggers">Filters &amp; Triggers</a></li>
{% endif %}
{% if watch['processor'] == 'restock_diff' %}
<li class="tab"><a href="#restock">Restock Detection</a></li>
{% endif %}
<li class="tab"><a href="#notifications">Notifications</a></li> <li class="tab"><a href="#notifications">Notifications</a></li>
</ul> </ul>
</div> </div>
@@ -43,14 +52,24 @@
<div class="box-wrap inner"> <div class="box-wrap inner">
<form class="pure-form pure-form-stacked" <form class="pure-form pure-form-stacked"
action="{{ url_for('edit_page', uuid=uuid, next = request.args.get('next'), unpause_on_save = request.args.get('unpause_on_save')) }}" method="POST"> action="{{ url_for('edit_page', uuid=uuid, next = request.args.get('next'), unpause_on_save = request.args.get('unpause_on_save')) }}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/> <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div class="tab-pane-inner" id="general"> <div class="tab-pane-inner" id="general">
<fieldset> <fieldset>
<div class="pure-control-group"> <div class="pure-control-group">
{{ render_field(form.url, placeholder="https://...", required=true, class="m-d") }} {{ render_field(form.url, placeholder="https://...", required=true, class="m-d") }}
<span class="pure-form-message-inline">Some sites use JavaScript to create the content, for this you should <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver">use the Chrome/WebDriver Fetcher</a></span><br/> <span class="pure-form-message-inline">Some sites use JavaScript to create the content, for this you should <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver">use the Chrome/WebDriver Fetcher</a></span><br>
<span class="pure-form-message-inline">You can use variables in the URL, perfect for inserting the current date and other logic, <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Handling-variables-in-the-watched-URL">help and examples here</a></span><br/> <span class="pure-form-message-inline">You can use variables in the URL, perfect for inserting the current date and other logic, <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Handling-variables-in-the-watched-URL">help and examples here</a></span><br>
<span class="pure-form-message-inline">
{% if watch['processor'] == 'text_json_diff' %}
Current mode: <strong>Webpage Text/HTML, JSON and PDF changes.</strong><br>
<a href="{{url_for('edit_page', uuid=uuid)}}?switch_processor=restock_diff" class="pure-button button-xsmall">Switch to re-stock detection mode.</a>
{% else %}
Current mode: <strong>Re-stock detection.</strong><br>
<a href="{{url_for('edit_page', uuid=uuid)}}?switch_processor=text_json_diff" class="pure-button button-xsmall">Switch to Webpage Text/HTML, JSON and PDF changes mode.</a>
{% endif %}
</span>
</div> </div>
<div class="pure-control-group"> <div class="pure-control-group">
{{ render_field(form.title, class="m-d") }} {{ render_field(form.title, class="m-d") }}
@@ -106,10 +125,10 @@
{{ render_field(form.webdriver_delay) }} {{ render_field(form.webdriver_delay) }}
<div class="pure-form-message-inline"> <div class="pure-form-message-inline">
<strong>If you're having trouble waiting for the page to be fully rendered (text missing etc), try increasing the 'wait' time here.</strong> <strong>If you're having trouble waiting for the page to be fully rendered (text missing etc), try increasing the 'wait' time here.</strong>
<br/> <br>
This will wait <i>n</i> seconds before extracting the text. This will wait <i>n</i> seconds before extracting the text.
{% if using_global_webdriver_wait %} {% if using_global_webdriver_wait %}
<br/><strong>Using the current global default settings</strong> <br><strong>Using the current global default settings</strong>
{% endif %} {% endif %}
</div> </div>
</div> </div>
@@ -133,6 +152,17 @@
{{ render_field(form.headers, rows=5, placeholder="Example {{ render_field(form.headers, rows=5, placeholder="Example
Cookie: foobar Cookie: foobar
User-Agent: wonderbra 1.0") }} User-Agent: wonderbra 1.0") }}
<div class="pure-form-message-inline">
{% if has_extra_headers_file %}
<strong>Alert! Extra headers file found and will be added to this watch!</strong>
{% else %}
Headers can be also read from a file in your data-directory <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Adding-headers-from-an-external-file">Read more here</a>
{% endif %}
<br>
(Not supported by Selenium browser)
</div>
</div> </div>
<div class="pure-control-group" id="request-body"> <div class="pure-control-group" id="request-body">
{{ render_field(form.body, rows=5, placeholder="Example {{ render_field(form.body, rows=5, placeholder="Example
@@ -146,7 +176,7 @@ User-Agent: wonderbra 1.0") }}
</div> </div>
{% if playwright_enabled %} {% if playwright_enabled %}
<div class="tab-pane-inner" id="browser-steps"> <div class="tab-pane-inner" id="browser-steps">
<img class="beta-logo" src="{{url_for('static_content', group='images', filename='beta-logo.png')}}"> <img class="beta-logo" src="{{url_for('static_content', group='images', filename='beta-logo.png')}}" alt="New beta functionality">
<fieldset> <fieldset>
<div class="pure-control-group"> <div class="pure-control-group">
<!-- <!--
@@ -169,11 +199,12 @@ User-Agent: wonderbra 1.0") }}
<span class="loader" > <span class="loader" >
<span id="browsersteps-click-start"> <span id="browsersteps-click-start">
<h2 >Click here to Start</h2> <h2 >Click here to Start</h2>
-Please allow 10-15 seconds for the browser to connect.
+<svg style="height: 3.5rem;" version="1.1" viewBox="0 0 32 32" xml:space="preserve" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink"><g id="Layer_1"/><g id="play_x5F_alt"><path d="M16,0C7.164,0,0,7.164,0,16s7.164,16,16,16s16-7.164,16-16S24.836,0,16,0z M10,24V8l16.008,8L10,24z" style="fill: var(--color-grey-400);"/></g></svg><br>
+Please allow 10-15 seconds for the browser to connect.<br>
</span> </span>
<div class="spinner" style="display: none;"></div> <div class="spinner" style="display: none;"></div>
</span> </span>
<img class="noselect" id="browsersteps-img" src="" style="max-width: 100%; width: 100%;" /> <img class="noselect" id="browsersteps-img" src="" style="max-width: 100%; width: 100%;" >
<canvas class="noselect" id="browsersteps-selector-canvas" style="max-width: 100%; width: 100%;"></canvas> <canvas class="noselect" id="browsersteps-selector-canvas" style="max-width: 100%; width: 100%;"></canvas>
</div> </div>
</div> </div>
@@ -203,7 +234,7 @@ User-Agent: wonderbra 1.0") }}
<div class="field-group" id="notification-field-group"> <div class="field-group" id="notification-field-group">
{% if has_default_notification_urls %} {% if has_default_notification_urls %}
<div class="inline-warning"> <div class="inline-warning">
<img class="inline-warning-icon" src="{{url_for('static_content', group='images', filename='notice.svg')}}" alt="Look out!" title="Lookout!"/> <img class="inline-warning-icon" src="{{url_for('static_content', group='images', filename='notice.svg')}}" alt="Look out!" title="Lookout!" >
There are <a href="{{ url_for('settings_page')}}#notifications">system-wide notification URLs enabled</a>, this form will override notification settings for this watch only &dash; an empty Notification URL list here will still send notifications. There are <a href="{{ url_for('settings_page')}}#notifications">system-wide notification URLs enabled</a>, this form will override notification settings for this watch only &dash; an empty Notification URL list here will still send notifications.
</div> </div>
{% endif %} {% endif %}
@@ -214,9 +245,10 @@ User-Agent: wonderbra 1.0") }}
</fieldset> </fieldset>
</div> </div>
{% if watch['processor'] == 'text_json_diff' %}
<div class="tab-pane-inner" id="filters-and-triggers"> <div class="tab-pane-inner" id="filters-and-triggers">
<div class="pure-control-group"> <div class="pure-control-group">
<strong>Pro-tips:</strong><br/> <strong>Pro-tips:</strong><br>
<ul> <ul>
<li> <li>
Use the preview page to see your filters and triggers highlighted. Use the preview page to see your filters and triggers highlighted.
@@ -226,12 +258,6 @@ User-Agent: wonderbra 1.0") }}
</li> </li>
</ul> </ul>
</div> </div>
<fieldset>
<div class="pure-control-group">
{{ render_checkbox_field(form.check_unique_lines) }}
<span class="pure-form-message-inline">Good for websites that just move the content around, and you want to know when NEW content is added, compares new lines against all history for this watch.</span>
</div>
</fieldset>
<div class="pure-control-group"> <div class="pure-control-group">
{% set field = render_field(form.include_filters, {% set field = render_field(form.include_filters,
rows=5, rows=5,
@@ -241,9 +267,9 @@ xpath://body/div/span[contains(@class, 'example-class')]",
%} %}
{{ field }} {{ field }}
{% if '/text()' in field %} {% if '/text()' in field %}
<span class="pure-form-message-inline"><strong>Note!: //text() function does not work where the &lt;element&gt; contains &lt;![CDATA[]]&gt;</strong></span><br/> <span class="pure-form-message-inline"><strong>Note!: //text() function does not work where the &lt;element&gt; contains &lt;![CDATA[]]&gt;</strong></span><br>
{% endif %} {% endif %}
<span class="pure-form-message-inline">One rule per line, <i>any</i> rules that matches will be used.<br/> <span class="pure-form-message-inline">One rule per line, <i>any</i> rules that matches will be used.<br>
<ul> <ul>
<li>CSS - Limit text to this CSS rule, only text matching this CSS rule is included.</li> <li>CSS - Limit text to this CSS rule, only text matching this CSS rule is included.</li>
@@ -266,40 +292,42 @@ xpath://body/div/span[contains(@class, 'example-class')]",
</li> </li>
</ul> </ul>
Please be sure that you thoroughly understand how to write CSS, JSONPath, XPath{% if jq_support %}, or jq selector{%endif%} rules before filing an issue on GitHub! <a Please be sure that you thoroughly understand how to write CSS, JSONPath, XPath{% if jq_support %}, or jq selector{%endif%} rules before filing an issue on GitHub! <a
href="https://github.com/dgtlmoon/changedetection.io/wiki/CSS-Selector-help">here for more CSS selector help</a>.<br/> href="https://github.com/dgtlmoon/changedetection.io/wiki/CSS-Selector-help">here for more CSS selector help</a>.<br>
</span> </span>
</div> </div>
<div class="pure-control-group"> <fieldset class="pure-control-group">
{{ render_field(form.subtractive_selectors, rows=5, placeholder="header {{ render_field(form.subtractive_selectors, rows=5, placeholder="header
footer footer
nav nav
.stockticker") }} .stockticker") }}
<span class="pure-form-message-inline"> <span class="pure-form-message-inline">
<ul> <ul>
<li> Remove HTML element(s) by CSS selector before text conversion. </li> <li> Remove HTML element(s) by CSS selector before text conversion. </li>
<li> Add multiple elements or CSS selectors per line to ignore multiple parts of the HTML. </li> <li> Add multiple elements or CSS selectors per line to ignore multiple parts of the HTML. </li>
</ul> </ul>
</span> </span>
</div> </fieldset>
<fieldset class="pure-group"> <div class="text-filtering">
{{ render_field(form.ignore_text, rows=5, placeholder="Some text to ignore in a line <fieldset class="pure-group" id="text-filtering-type-options">
/some.regex\d{2}/ for case-INsensitive regex <h3>Text filtering</h3>
") }} Limit trigger/ignore/block/extract to;<br>
<span class="pure-form-message-inline"> {{ render_checkbox_field(form.filter_text_added) }}
<ul> {{ render_checkbox_field(form.filter_text_replaced) }}
<li>Each line processed separately, any line matching will be ignored (removed before creating the checksum)</li> {{ render_checkbox_field(form.filter_text_removed) }}
<li>Regular Expression support, wrap the entire line in forward slash <code>/regex/</code></li> <span class="pure-form-message-inline">Note: Depending on the length and similarity of the text on each line, the algorithm may consider an <strong>addition</strong> instead of <strong>replacement</strong> for example.</span>
<li>Changing this will affect the comparison checksum which may trigger an alert</li> <span class="pure-form-message-inline">So it's always better to select <strong>Added</strong>+<strong>Replaced</strong> when you're interested in new content.</span><br>
<li>Use the preview/show current tab to see ignores</li> <span class="pure-form-message-inline">When content is merely moved in a list, it will also trigger an <strong>addition</strong>, consider enabling <code><strong>Only trigger when unique lines appear</strong></code></span>
</ul> </fieldset>
</span>
</fieldset> <fieldset class="pure-control-group">
{{ render_checkbox_field(form.check_unique_lines) }}
<span class="pure-form-message-inline">Good for websites that just move the content around, and you want to know when NEW content is added, compares new lines against all history for this watch.</span>
</fieldset>
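The Added, Replaced and Removed checkboxes above classify each changed line before the trigger/ignore/block/extract rules are applied. The classification code itself is not part of this diff; the following is only a minimal sketch, assuming a difflib-style line comparison, of why a lightly edited line is reported as a replacement rather than an addition (which is why the help text suggests ticking Added together with Replaced):

# Illustrative sketch only, not the changedetection.io implementation.
# Assumes line-level opcodes from Python's difflib.
import difflib

def classify_lines(before: str, after: str):
    """Bucket changed lines into added / removed / replaced, mirroring the checkboxes."""
    buckets = {'added': [], 'removed': [], 'replaced': []}
    old_lines, new_lines = before.splitlines(), after.splitlines()
    matcher = difflib.SequenceMatcher(None, old_lines, new_lines)
    for tag, i1, i2, j1, j2 in matcher.get_opcodes():
        if tag == 'insert':
            buckets['added'] += new_lines[j1:j2]
        elif tag == 'delete':
            buckets['removed'] += old_lines[i1:i2]
        elif tag == 'replace':
            # A line that only changed slightly lands here rather than in 'added'.
            buckets['replaced'] += new_lines[j1:j2]
    return buckets

# A small price change is classified as 'replaced', not 'added':
print(classify_lines("price: $10.99\nIn stock", "price: $11.99\nIn stock"))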
<fieldset> <fieldset>
<div class="pure-control-group"> <div class="pure-control-group">
{{ render_field(form.trigger_text, rows=5, placeholder="Some text to wait for in a line {{ render_field(form.trigger_text, rows=5, placeholder="Some text to wait for in a line
/some.regex\d{2}/ for case-INsensitive regex /some.regex\d{2}/ for case-INsensitive regex
") }} ") }}
<span class="pure-form-message-inline"> <span class="pure-form-message-inline">
<ul> <ul>
<li>Text to wait for before triggering a change/notification, all text and regex are tested <i>case-insensitive</i>.</li> <li>Text to wait for before triggering a change/notification, all text and regex are tested <i>case-insensitive</i>.</li>
@@ -310,6 +338,21 @@ nav
</span> </span>
</div> </div>
</fieldset> </fieldset>
<fieldset class="pure-group">
{{ render_field(form.ignore_text, rows=5, placeholder="Some text to ignore in a line
/some.regex\d{2}/ for case-INsensitive regex
") }}
<span class="pure-form-message-inline">
<ul>
<li>Each line processed separately, any line matching will be ignored (removed before creating the checksum)</li>
<li>Regular Expression support, wrap the entire line in forward slash <code>/regex/</code></li>
<li>Changing this will affect the comparison checksum which may trigger an alert</li>
<li>Use the preview/show current tab to see ignores</li>
</ul>
</span>
</fieldset>
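The ignore rules above are matched line by line: a plain rule behaves like a case-insensitive substring, and a rule wrapped in forward slashes like a regular expression, with matching lines dropped before the comparison checksum is built. As a rough sketch under those assumptions (the actual matching code is not shown in this diff):

# Minimal sketch only; the rule handling and checksum choice here are assumptions.
import hashlib
import re

def checksum_without_ignored(text, ignore_rules):
    kept = []
    for line in text.splitlines():
        matched = False
        for rule in ignore_rules:
            if rule.startswith('/') and rule.endswith('/') and len(rule) > 2:
                matched = bool(re.search(rule[1:-1], line, re.IGNORECASE))
            else:
                matched = rule.lower() in line.lower()
            if matched:
                break
        if not matched:
            kept.append(line)
    # Only the remaining lines feed the comparison checksum.
    return hashlib.md5("\n".join(kept).encode('utf-8')).hexdigest()

print(checksum_without_ignored("Price: 10\nUpdated 12:01", [r"/updated \d+:\d+/"]))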
<fieldset> <fieldset>
<div class="pure-control-group"> <div class="pure-control-group">
{{ render_field(form.text_should_not_be_present, rows=5, placeholder="For example: Out of stock {{ render_field(form.text_should_not_be_present, rows=5, placeholder="For example: Out of stock
@@ -334,7 +377,7 @@ Unavailable") }}
<li>Extracts text in the final output (line by line) after other filters using regular expressions; <li>Extracts text in the final output (line by line) after other filters using regular expressions;
<ul> <ul>
<li>Regular expression &dash; example <code>/reports.+?2022/i</code></li> <li>Regular expression &dash; example <code>/reports.+?2022/i</code></li>
<li>Use <code>/(?aiLmsux)/</code> type flags (more <a href="https://docs.python.org/3/library/re.html#index-15">information here</a>)<br/></li> <li>Use <code>/(?aiLmsux)/</code> type flags (more <a href="https://docs.python.org/3/library/re.html#index-15">information here</a>)<br></li>
<li>Keyword example &dash; example <code>Out of stock</code></li> <li>Keyword example &dash; example <code>Out of stock</code></li>
<li>Use groups to extract just that text &dash; example <code>/reports.+?(\d+)/i</code> returns a list of years only</li> <li>Use groups to extract just that text &dash; example <code>/reports.+?(\d+)/i</code> returns a list of years only</li>
</ul> </ul>
@@ -344,16 +387,30 @@ Unavailable") }}
</span> </span>
</div> </div>
</fieldset> </fieldset>
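The extract rules described above run over the final filtered text line by line; a /pattern/i style rule is a case-insensitive regular expression, and a capture group returns just the captured part. A minimal sketch of that behaviour, assuming this rule syntax rather than quoting the app's own code:

# Illustrative sketch only; mirrors the help text above, not the app's internals.
import re

def extract_lines(text, rule):
    """Apply one /pattern/flags extract rule line by line; a group returns just the group."""
    m = re.fullmatch(r"/(.+)/(\w*)", rule)
    pattern, flags = (m.group(1), m.group(2)) if m else (re.escape(rule), "")
    rx = re.compile(pattern, re.IGNORECASE if "i" in flags else 0)
    results = []
    for line in text.splitlines():
        hit = rx.search(line)
        if hit:
            results.append(hit.group(1) if hit.groups() else hit.group(0))
    return results

# e.g. "/reports.+?(\d+)/i" over "Reports for 2022" returns just the year:
print(extract_lines("Reports for 2022\nNothing here", r"/reports.+?(\d+)/i"))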
</div>
</div> </div>
{% endif %}
{% if watch['processor'] == 'restock_diff' %}
<div class="tab-pane-inner" id="restock">
<fieldset>
<div class="pure-control-group">
{{ render_checkbox_field(form.in_stock_only) }}
<span class="pure-form-message-inline">Only trigger notifications when page changes from <strong>out of stock</strong> to <strong>back in stock</strong></span>
</div>
</fieldset>
</div>
{% endif %}
{% if watch['processor'] == 'text_json_diff' %}
<div class="tab-pane-inner visual-selector-ui" id="visualselector"> <div class="tab-pane-inner visual-selector-ui" id="visualselector">
<img class="beta-logo" src="{{url_for('static_content', group='images', filename='beta-logo.png')}}"> <img class="beta-logo" src="{{url_for('static_content', group='images', filename='beta-logo.png')}}" alt="New beta functionality">
<fieldset> <fieldset>
<div class="pure-control-group"> <div class="pure-control-group">
{% if visualselector_enabled %} {% if visualselector_enabled %}
<span class="pure-form-message-inline"> <span class="pure-form-message-inline">
The Visual Selector tool lets you select the <i>text</i> elements that will be used for the change detection &dash; after the <i>Browser Steps</i> has completed.<br/><br/> The Visual Selector tool lets you select the <i>text</i> elements that will be used for the change detection &dash; after the <i>Browser Steps</i> has completed.<br><br>
</span> </span>
<div id="selector-header"> <div id="selector-header">
@@ -364,7 +421,7 @@ Unavailable") }}
<!-- request the screenshot and get the element offset info ready --> <!-- request the screenshot and get the element offset info ready -->
<!-- use img src ready load to know everything is ready to map out --> <!-- use img src ready load to know everything is ready to map out -->
<!-- @todo: maybe something interesting like a field to select 'elements that contain text... and their parents n' --> <!-- @todo: maybe something interesting like a field to select 'elements that contain text... and their parents n' -->
<img id="selector-background" /> <img id="selector-background" >
<canvas id="selector-canvas"></canvas> <canvas id="selector-canvas"></canvas>
</div> </div>
<div id="selector-current-xpath" style="overflow-x: hidden"><strong>Currently:</strong>&nbsp;<span class="text">Loading...</span></div> <div id="selector-current-xpath" style="overflow-x: hidden"><strong>Currently:</strong>&nbsp;<span class="text">Loading...</span></div>
@@ -378,6 +435,7 @@ Unavailable") }}
</div> </div>
</fieldset> </fieldset>
</div> </div>
{% endif %}
<div id="actions"> <div id="actions">
<div class="pure-control-group"> <div class="pure-control-group">

View File

@@ -1,6 +1,7 @@
{% extends 'base.html' %} {% extends 'base.html' %}
{% block content %} {% block content %}
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script> {% from '_helpers.jinja' import render_field %}
<script src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<div class="edit-form monospaced-textarea"> <div class="edit-form monospaced-textarea">
<div class="tabs collapsable"> <div class="tabs collapsable">
@@ -12,9 +13,8 @@
<div class="box-wrap inner"> <div class="box-wrap inner">
<form class="pure-form pure-form-aligned" action="{{url_for('import_page')}}" method="POST"> <form class="pure-form pure-form-aligned" action="{{url_for('import_page')}}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/> <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div class="tab-pane-inner" id="url-list"> <div class="tab-pane-inner" id="url-list">
<fieldset class="pure-group">
<legend> <legend>
Enter one URL per line, and optionally add tags for each URL after a space, delineated by comma Enter one URL per line, and optionally add tags for each URL after a space, delineated by comma
(,): (,):
@@ -23,7 +23,7 @@
<br> <br>
URLs which do not pass validation will stay in the textarea. URLs which do not pass validation will stay in the textarea.
</legend> </legend>
{{ render_field(form.processor, class="processor") }}
<textarea name="urls" class="pure-input-1-2" placeholder="https://" <textarea name="urls" class="pure-input-1-2" placeholder="https://"
style="width: 100%; style="width: 100%;
@@ -31,22 +31,24 @@
white-space: pre; white-space: pre;
overflow-wrap: normal; overflow-wrap: normal;
overflow-x: scroll;" rows="25">{{ import_url_list_remaining }}</textarea> overflow-x: scroll;" rows="25">{{ import_url_list_remaining }}</textarea>
</fieldset>
<div id="quick-watch-processor-type">
</div>
</div> </div>
<div class="tab-pane-inner" id="distill-io"> <div class="tab-pane-inner" id="distill-io">
<fieldset class="pure-group">
<legend> <legend>
Copy and Paste your Distill.io watch 'export' file, this should be a JSON file.</br> Copy and Paste your Distill.io watch 'export' file, this should be a JSON file.<br>
This is <i>experimental</i>, supported fields are <code>name</code>, <code>uri</code>, <code>tags</code>, <code>config:selections</code>, the rest (including <code>schedule</code>) are ignored. This is <i>experimental</i>, supported fields are <code>name</code>, <code>uri</code>, <code>tags</code>, <code>config:selections</code>, the rest (including <code>schedule</code>) are ignored.
<br/> <br>
<p> <p>
How to export? <a href="https://distill.io/docs/web-monitor/how-export-and-import-monitors/">https://distill.io/docs/web-monitor/how-export-and-import-monitors/</a><br/> How to export? <a href="https://distill.io/docs/web-monitor/how-export-and-import-monitors/">https://distill.io/docs/web-monitor/how-export-and-import-monitors/</a><br>
Be sure to set your default fetcher to Chrome if required.</br> Be sure to set your default fetcher to Chrome if required.<br>
</p> </p>
</legend> </legend>
@@ -75,7 +77,7 @@
] ]
} }
" rows="25">{{ original_distill_json }}</textarea> " rows="25">{{ original_distill_json }}</textarea>
</fieldset>
</div> </div>
<button type="submit" class="pure-button pure-input-1-2 pure-button-primary">Import</button> <button type="submit" class="pure-button pure-input-1-2 pure-button-primary">Import</button>
</form> </form>

View File

@@ -4,13 +4,13 @@
<div class="login-form"> <div class="login-form">
<div class="inner"> <div class="inner">
<form class="pure-form pure-form-stacked" action="{{url_for('login')}}" method="POST"> <form class="pure-form pure-form-stacked" action="{{url_for('login')}}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/> <input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<fieldset> <fieldset>
<div class="pure-control-group"> <div class="pure-control-group">
<label for="password">Password</label> <label for="password">Password</label>
<input type="password" id="password" required="" name="password" value="" <input type="password" id="password" required="" name="password" value=""
size="15" autofocus /> size="15" autofocus />
<input type="hidden" id="email" name="email" value="defaultuser@changedetection.io" /> <input type="hidden" id="email" name="email" value="defaultuser@changedetection.io" >
</div> </div>
<div class="pure-control-group"> <div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Login</button> <button type="submit" class="pure-button pure-button-primary">Login</button>

View File

@@ -7,9 +7,9 @@
const error_screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid, error_screenshot=1) }}"; const error_screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid, error_screenshot=1) }}";
{% endif %} {% endif %}
</script> </script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='diff-overview.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='diff-overview.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<div class="tabs"> <div class="tabs">
<ul> <ul>
{% if last_error_text %}<li class="tab" id="error-text-tab"><a href="#error-text">Error Text</a></li> {% endif %} {% if last_error_text %}<li class="tab" id="error-text-tab"><a href="#error-text">Error Text</a></li> {% endif %}
@@ -31,7 +31,7 @@
<div class="tab-pane-inner" id="error-screenshot"> <div class="tab-pane-inner" id="error-screenshot">
<div class="snapshot-age error">{{watch.snapshot_error_screenshot_ctime|format_seconds_ago}} seconds ago</div> <div class="snapshot-age error">{{watch.snapshot_error_screenshot_ctime|format_seconds_ago}} seconds ago</div>
<img id="error-screenshot-img" style="max-width: 80%" alt="Current erroring screenshot from most recent request"/> <img id="error-screenshot-img" style="max-width: 80%" alt="Current erroring screenshot from most recent request" >
</div> </div>
<div class="tab-pane-inner" id="text"> <div class="tab-pane-inner" id="text">
@@ -54,11 +54,11 @@
<div class="tip"> <div class="tip">
For now, Differences are performed on text, not graphically, only the latest screenshot is available. For now, Differences are performed on text, not graphically, only the latest screenshot is available.
</div> </div>
</br> <br>
{% if is_html_webdriver %} {% if is_html_webdriver %}
{% if screenshot %} {% if screenshot %}
<div class="snapshot-age">{{watch.snapshot_screenshot_ctime|format_timestamp_timeago}}</div> <div class="snapshot-age">{{watch.snapshot_screenshot_ctime|format_timestamp_timeago}}</div>
<img style="max-width: 80%" id="screenshot-img" alt="Current screenshot from most recent request"/> <img style="max-width: 80%" id="screenshot-img" alt="Current screenshot from most recent request" >
{% else %} {% else %}
No screenshot available just yet! Try rechecking the page. No screenshot available just yet! Try rechecking the page.
{% endif %} {% endif %}
@@ -67,4 +67,4 @@
{% endif %} {% endif %}
</div> </div>
</div> </div>
{% endblock %} {% endblock %}

View File

@@ -9,10 +9,10 @@
const email_notification_prefix=JSON.parse('{{emailprefix|tojson}}'); const email_notification_prefix=JSON.parse('{{emailprefix|tojson}}');
{% endif %} {% endif %}
</script> </script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='global-settings.js')}}" defer></script> <script src="{{url_for('static_content', group='js', filename='global-settings.js')}}" defer></script>
<div class="edit-form"> <div class="edit-form">
<div class="tabs collapsable"> <div class="tabs collapsable">
<ul> <ul>
@@ -21,11 +21,12 @@
<li class="tab"><a href="#fetching">Fetching</a></li> <li class="tab"><a href="#fetching">Fetching</a></li>
<li class="tab"><a href="#filters">Global Filters</a></li> <li class="tab"><a href="#filters">Global Filters</a></li>
<li class="tab"><a href="#api">API</a></li> <li class="tab"><a href="#api">API</a></li>
<li class="tab"><a href="#proxies">CAPTCHA &amp; Proxies</a></li>
</ul> </ul>
</div> </div>
<div class="box-wrap inner"> <div class="box-wrap inner">
<form class="pure-form pure-form-stacked settings" action="{{url_for('settings_page')}}" method="POST"> <form class="pure-form pure-form-stacked settings" action="{{url_for('settings_page')}}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/> <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" >
<div class="tab-pane-inner" id="general"> <div class="tab-pane-inner" id="general">
<fieldset> <fieldset>
<div class="pure-control-group"> <div class="pure-control-group">
@@ -39,7 +40,7 @@
<div class="pure-control-group"> <div class="pure-control-group">
{{ render_field(form.application.form.filter_failure_notification_threshold_attempts, class="filter_failure_notification_threshold_attempts") }} {{ render_field(form.application.form.filter_failure_notification_threshold_attempts, class="filter_failure_notification_threshold_attempts") }}
<span class="pure-form-message-inline">After this many consecutive times that the CSS/xPath filter is missing, send a notification <span class="pure-form-message-inline">After this many consecutive times that the CSS/xPath filter is missing, send a notification
<br/> <br>
Set to <strong>0</strong> to disable Set to <strong>0</strong> to disable
</span> </span>
</div> </div>
@@ -56,14 +57,23 @@
{% endif %} {% endif %}
</div> </div>
<div class="pure-control-group">
{{ render_checkbox_field(form.application.form.shared_diff_access, class="shared_diff_access") }}
<span class="pure-form-message-inline">Allow access to view watch diff page when password is enabled (Good for sharing the diff page)
</span>
</div>
<div class="pure-control-group"> <div class="pure-control-group">
{{ render_field(form.application.form.base_url, placeholder="http://yoursite.com:5000/", {{ render_field(form.application.form.base_url, placeholder="http://yoursite.com:5000/",
class="m-d") }} class="m-d") }}
<span class="pure-form-message-inline"> <span class="pure-form-message-inline">
Base URL used for the <code>{{ '{{ base_url }}' }}</code> token in notifications and RSS links.<br/>Default value is the ENV var 'BASE_URL' (Currently "{{settings_application['current_base_url']}}"), Base URL used for the <code>{{ '{{ base_url }}' }}</code> token in notifications and RSS links.<br>Default value is the ENV var 'BASE_URL' (Currently "{{settings_application['current_base_url']}}"),
<a href="https://github.com/dgtlmoon/changedetection.io/wiki/Configurable-BASE_URL-setting">read more here</a>. <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Configurable-BASE_URL-setting">read more here</a>.
</span> </span>
</div> </div>
<div class="pure-control-group">
{{ render_field(form.application.form.pager_size) }}
<span class="pure-form-message-inline">Number of items per page in the watch overview list, 0 to disable.</span>
</div>
<div class="pure-control-group"> <div class="pure-control-group">
{{ render_checkbox_field(form.application.form.extract_title_as_title) }} {{ render_checkbox_field(form.application.form.extract_title_as_title) }}
@@ -99,13 +109,13 @@
<p>Use the <strong>Basic</strong> method (default) where your watched sites don't need Javascript to render.</p> <p>Use the <strong>Basic</strong> method (default) where your watched sites don't need Javascript to render.</p>
<p>The <strong>Chrome/Javascript</strong> method requires a network connection to a running WebDriver+Chrome server, set by the ENV var 'WEBDRIVER_URL'. </p> <p>The <strong>Chrome/Javascript</strong> method requires a network connection to a running WebDriver+Chrome server, set by the ENV var 'WEBDRIVER_URL'. </p>
</span> </span>
<br/> <br>
Tip: <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration#brightdata-proxy-support">Connect using BrightData Proxies, find out more here.</a> Tip: <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration#brightdata-proxy-support">Connect using BrightData Proxies, find out more here.</a>
</div> </div>
<fieldset class="pure-group" id="webdriver-override-options"> <fieldset class="pure-group" id="webdriver-override-options">
<div class="pure-form-message-inline"> <div class="pure-form-message-inline">
<strong>If you're having trouble waiting for the page to be fully rendered (text missing etc), try increasing the 'wait' time here.</strong> <strong>If you're having trouble waiting for the page to be fully rendered (text missing etc), try increasing the 'wait' time here.</strong>
<br/> <br>
This will wait <i>n</i> seconds before extracting the text. This will wait <i>n</i> seconds before extracting the text.
</div> </div>
<div class="pure-control-group"> <div class="pure-control-group">
@@ -118,14 +128,14 @@
<fieldset class="pure-group"> <fieldset class="pure-group">
{{ render_checkbox_field(form.application.form.ignore_whitespace) }} {{ render_checkbox_field(form.application.form.ignore_whitespace) }}
<span class="pure-form-message-inline">Ignore whitespace, tabs and new-lines/line-feeds when considering if a change was detected.<br/> <span class="pure-form-message-inline">Ignore whitespace, tabs and new-lines/line-feeds when considering if a change was detected.<br>
<i>Note:</i> Changing this will change the status of your existing watches, possibly trigger alerts etc. <i>Note:</i> Changing this will change the status of your existing watches, possibly trigger alerts etc.
</span> </span>
</fieldset> </fieldset>
<fieldset class="pure-group"> <fieldset class="pure-group">
{{ render_checkbox_field(form.application.form.render_anchor_tag_content) }} {{ render_checkbox_field(form.application.form.render_anchor_tag_content) }}
<span class="pure-form-message-inline">Render anchor tag content, default disabled, when enabled renders links as <code>(link text)[https://somesite.com]</code> <span class="pure-form-message-inline">Render anchor tag content, default disabled, when enabled renders links as <code>(link text)[https://somesite.com]</code>
<br/> <br>
<i>Note:</i> Changing this could affect the content of your existing watches, possibly trigger alerts etc. <i>Note:</i> Changing this could affect the content of your existing watches, possibly trigger alerts etc.
</span> </span>
</fieldset> </fieldset>
@@ -145,7 +155,7 @@ nav
{{ render_field(form.application.form.global_ignore_text, rows=5, placeholder="Some text to ignore in a line {{ render_field(form.application.form.global_ignore_text, rows=5, placeholder="Some text to ignore in a line
/some.regex\d{2}/ for case-INsensitive regex /some.regex\d{2}/ for case-INsensitive regex
") }} ") }}
<span class="pure-form-message-inline">Note: This is applied globally in addition to the per-watch rules.</span><br/> <span class="pure-form-message-inline">Note: This is applied globally in addition to the per-watch rules.</span><br>
<span class="pure-form-message-inline"> <span class="pure-form-message-inline">
<ul> <ul>
<li>Note: This is applied globally in addition to the per-watch rules.</li> <li>Note: This is applied globally in addition to the per-watch rules.</li>
@@ -164,20 +174,35 @@ nav
<div class="pure-control-group"> <div class="pure-control-group">
{{ render_checkbox_field(form.application.form.api_access_token_enabled) }} {{ render_checkbox_field(form.application.form.api_access_token_enabled) }}
<div class="pure-form-message-inline">Restrict API access limit by using <code>x-api-key</code> header</div><br/> <div class="pure-form-message-inline">Restrict API access limit by using <code>x-api-key</code> header</div><br>
<div class="pure-form-message-inline"><br/>API Key <span id="api-key">{{api_key}}</span> <div class="pure-form-message-inline"><br>API Key <span id="api-key">{{api_key}}</span>
<span style="display:none;" id="api-key-copy" >copy</span> <span style="display:none;" id="api-key-copy" >copy</span>
</div> </div>
</div> </div>
</div> </div>
<div class="tab-pane-inner" id="proxies">
<p><strong>Tip</strong>: You can connect to websites using <a href="https://brightdata.grsm.io/n0r16zf7eivq">BrightData</a> proxies, their service <strong>WebUnlocker</strong> will solve most CAPTCHAs, whilst their <strong>Residential Proxies</strong> may help to avoid CAPTCHA altogether. </p>
<p>It may be easier to try <strong>WebUnlocker</strong> first, WebUnlocker also supports country selection.</p>
<p>
When you have <a href="https://brightdata.grsm.io/n0r16zf7eivq">registered</a>, enabled the required services, visit the <A href="https://brightdata.com/cp/api_example?">API example page</A>, then select <strong>Python</strong>, set the country you wish to use, then copy+paste the example URL below<br>
The Proxy URL with BrightData should start with <code>http://brd-customer...</code>
</p>
<p>When you sign up using <a href="https://brightdata.grsm.io/n0r16zf7eivq">https://brightdata.grsm.io/n0r16zf7eivq</a> BrightData will match any first deposit up to $150</p>
<div class="pure-control-group">
{{ render_field(form.requests.form.extra_proxies) }}
<span class="pure-form-message-inline">"Name" will be used for selecting the proxy in the Watch Edit settings</span>
</div>
</div>
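The proxy URL described above (one starting with http://brd-customer...) is a standard HTTP proxy URL with the credentials embedded in it. Purely as an illustration of how such a URL is consumed, and not how changedetection.io's fetchers are implemented, a plain requests call through a hypothetical proxy of that shape would look like:

# Sketch only: the proxy URL below is a made-up placeholder, not a real endpoint.
import requests

proxy_url = "http://brd-customer-EXAMPLE-zone-EXAMPLE:password@proxy.example:22225"
response = requests.get(
    "https://example.com/",
    proxies={"http": proxy_url, "https": proxy_url},  # route both schemes through the proxy
    timeout=30,
)
print(response.status_code)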
<div id="actions"> <div id="actions">
<div class="pure-control-group"> <div class="pure-control-group">
{{ render_button(form.save_button) }} {{ render_button(form.save_button) }}
<a href="{{url_for('index')}}" class="pure-button button-small button-cancel">Back</a> <a href="{{url_for('index')}}" class="pure-button button-small button-cancel">Back</a>
<a href="{{url_for('clear_all_history')}}" class="pure-button button-small button-cancel">Clear Snapshot History</a> <a href="{{url_for('clear_all_history')}}" class="pure-button button-small button-cancel">Clear Snapshot History</a>
</div> </div>
</div> </div>
</form> </form>
</div> </div>

View File

@@ -0,0 +1 @@
<?xml version="1.0" encoding="utf-8"?><svg version="1.1" id="Layer_1" xmlns="http://www.w3.org/2000/svg" xmlns:xlink="http://www.w3.org/1999/xlink" viewBox="0 0 122.879 119.799" enable-background="new 0 0 122.879 119.799" xml:space="preserve"><g><path d="M49.988,0h0.016v0.007C63.803,0.011,76.298,5.608,85.34,14.652c9.027,9.031,14.619,21.515,14.628,35.303h0.007v0.033v0.04 h-0.007c-0.005,5.557-0.917,10.905-2.594,15.892c-0.281,0.837-0.575,1.641-0.877,2.409v0.007c-1.446,3.66-3.315,7.12-5.547,10.307 l29.082,26.139l0.018,0.016l0.157,0.146l0.011,0.011c1.642,1.563,2.536,3.656,2.649,5.78c0.11,2.1-0.543,4.248-1.979,5.971 l-0.011,0.016l-0.175,0.203l-0.035,0.035l-0.146,0.16l-0.016,0.021c-1.565,1.642-3.654,2.534-5.78,2.646 c-2.097,0.111-4.247-0.54-5.971-1.978l-0.015-0.011l-0.204-0.175l-0.029-0.024L78.761,90.865c-0.88,0.62-1.778,1.209-2.687,1.765 c-1.233,0.755-2.51,1.466-3.813,2.115c-6.699,3.342-14.269,5.222-22.272,5.222v0.007h-0.016v-0.007 c-13.799-0.004-26.296-5.601-35.338-14.645C5.605,76.291,0.016,63.805,0.007,50.021H0v-0.033v-0.016h0.007 c0.004-13.799,5.601-26.296,14.645-35.338C23.683,5.608,36.167,0.016,49.955,0.007V0H49.988L49.988,0z M50.004,11.21v0.007h-0.016 h-0.033V11.21c-10.686,0.007-20.372,4.35-27.384,11.359C15.56,29.578,11.213,39.274,11.21,49.973h0.007v0.016v0.033H11.21 c0.007,10.686,4.347,20.367,11.359,27.381c7.009,7.012,16.705,11.359,27.403,11.361v-0.007h0.016h0.033v0.007 c10.686-0.007,20.368-4.348,27.382-11.359c7.011-7.009,11.358-16.702,11.36-27.4h-0.006v-0.016v-0.033h0.006 c-0.006-10.686-4.35-20.372-11.358-27.384C70.396,15.56,60.703,11.213,50.004,11.21L50.004,11.21z"/></g></svg>


View File

@@ -1,14 +1,13 @@
{% extends 'base.html' %} {% extends 'base.html' %}
{% block content %} {% block content %}
{% from '_helpers.jinja' import render_simple_field, render_field %} {% from '_helpers.jinja' import render_simple_field, render_field %}
- {% from '_pagination.jinja' import pagination %}
- <script type="text/javascript" src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script>
- <script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-overview.js')}}" defer></script>
+ <script src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script>
+ <script src="{{url_for('static_content', group='js', filename='watch-overview.js')}}" defer></script>
<div class="box"> <div class="box">
<form class="pure-form" action="{{ url_for('form_quick_watch_add') }}" method="POST" id="new-watch-form"> <form class="pure-form" action="{{ url_for('form_quick_watch_add') }}" method="POST" id="new-watch-form">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/> <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" >
<fieldset> <fieldset>
<legend>Add a new change detection watch</legend> <legend>Add a new change detection watch</legend>
<div id="watch-add-wrapper-zone"> <div id="watch-add-wrapper-zone">
@@ -21,20 +20,31 @@
{{ render_simple_field(form.edit_and_watch_submit_button, title="Edit first then Watch") }} {{ render_simple_field(form.edit_and_watch_submit_button, title="Edit first then Watch") }}
</div> </div>
</div> </div>
<div id="quick-watch-processor-type">
{{ render_simple_field(form.processor, title="Edit first then Watch") }}
</div>
</fieldset> </fieldset>
<span style="color:#eee; font-size: 80%;"><img style="height: 1em;display:inline-block;" src="{{url_for('static_content', group='images', filename='spread-white.svg')}}" /> Tip: You can also add 'shared' watches. <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Sharing-a-Watch">More info</a></a></span> <span style="color:#eee; font-size: 80%;"><img alt="Create a shareable link" style="height: 1em;display:inline-block;" src="{{url_for('static_content', group='images', filename='spread-white.svg')}}" > Tip: You can also add 'shared' watches. <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Sharing-a-Watch">More info</a></span>
</form> </form>
<form class="pure-form" action="{{ url_for('form_watch_list_checkbox_operations') }}" method="POST" id="watch-list-form"> <form class="pure-form" action="{{ url_for('form_watch_list_checkbox_operations') }}" method="POST" id="watch-list-form">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/> <input type="hidden" name="csrf_token" value="{{ csrf_token() }}" >
<div id="checkbox-operations"> <div id="checkbox-operations">
<button class="pure-button button-secondary button-xsmall" style="font-size: 70%" name="op" value="pause">Pause</button> <button class="pure-button button-secondary button-xsmall" name="op" value="pause">Pause</button>
<button class="pure-button button-secondary button-xsmall" style="font-size: 70%" name="op" value="unpause">UnPause</button> <button class="pure-button button-secondary button-xsmall" name="op" value="unpause">UnPause</button>
<button class="pure-button button-secondary button-xsmall" style="font-size: 70%" name="op" value="mute">Mute</button> <button class="pure-button button-secondary button-xsmall" name="op" value="mute">Mute</button>
<button class="pure-button button-secondary button-xsmall" style="font-size: 70%" name="op" value="unmute">UnMute</button> <button class="pure-button button-secondary button-xsmall" name="op" value="unmute">UnMute</button>
<button class="pure-button button-secondary button-xsmall" style="font-size: 70%" name="op" value="notification-default">Use default notification</button> <button class="pure-button button-secondary button-xsmall" name="op" value="recheck">Recheck</button>
<button class="pure-button button-secondary button-xsmall" style="background: #dd4242; font-size: 70%" name="op" value="delete">Delete</button> <button class="pure-button button-secondary button-xsmall" name="op" value="mark-viewed">Mark viewed</button>
<button class="pure-button button-secondary button-xsmall" name="op" value="notification-default">Use default notification</button>
<button class="pure-button button-secondary button-xsmall" style="background: #dd4242;" name="op" value="clear-history">Clear/reset history</button>
<button class="pure-button button-secondary button-xsmall" style="background: #dd4242;" name="op" value="delete">Delete</button>
</div> </div>
{% if watches|length >= pagination.per_page %}
{{ pagination.info }}
{% endif %}
{% if search_q %}<div id="search-result-info">Searching "<strong><i>{{search_q}}</i></strong>"</div>{% endif %}
<div> <div>
<a href="{{url_for('index')}}" class="pure-button button-tag {{'active' if not active_tag }}">All</a> <a href="{{url_for('index')}}" class="pure-button button-tag {{'active' if not active_tag }}">All</a>
{% for tag in tags %} {% for tag in tags %}
@@ -44,60 +54,95 @@
{% endfor %} {% endfor %}
</div> </div>
{% set sort_order = request.args.get('order', 'asc') == 'asc' %} {% set sort_order = sort_order or 'asc' %}
{% set sort_attribute = request.args.get('sort', 'last_changed') %} {% set sort_attribute = sort_attribute or 'last_changed' %}
{% set pagination_page = request.args.get('page', 0) %} {% set pagination_page = request.args.get('page', 0) %}
<div id="watch-table-wrapper"> <div id="watch-table-wrapper">
<table class="pure-table pure-table-striped watch-table"> <table class="pure-table pure-table-striped watch-table">
<thead> <thead>
<tr> <tr>
<th><input style="vertical-align: middle" type="checkbox" id="check-all"/> #</th> {% set link_order = "desc" if sort_order == 'asc' else "asc" %}
<th></th>
{% set link_order = "desc" if sort_order else "asc" %}
{% set arrow_span = "" %} {% set arrow_span = "" %}
<th><a class="{{ 'active '+link_order if sort_attribute == 'label' else 'inactive' }}" href="{{url_for('index', sort='label', order=link_order)}}">Website <span class='arrow {{link_order}}'></span></a></th> <th><input style="vertical-align: middle" type="checkbox" id="check-all" > <a class="{{ 'active '+link_order if sort_attribute == 'date_created' else 'inactive' }}" href="{{url_for('index', sort='date_created', order=link_order, tag=active_tag)}}"># <span class='arrow {{link_order}}'></span></a></th>
<th><a class="{{ 'active '+link_order if sort_attribute == 'last_checked' else 'inactive' }}" href="{{url_for('index', sort='last_checked', order=link_order)}}">Last Checked <span class='arrow {{link_order}}'></span></a></th> <th></th>
<th><a class="{{ 'active '+link_order if sort_attribute == 'last_changed' else 'inactive' }}" href="{{url_for('index', sort='last_changed', order=link_order)}}">Last Changed <span class='arrow {{link_order}}'></span></a></th> <th><a class="{{ 'active '+link_order if sort_attribute == 'label' else 'inactive' }}" href="{{url_for('index', sort='label', order=link_order, tag=active_tag)}}">Website <span class='arrow {{link_order}}'></span></a></th>
<th><a class="{{ 'active '+link_order if sort_attribute == 'last_checked' else 'inactive' }}" href="{{url_for('index', sort='last_checked', order=link_order, tag=active_tag)}}">Last Checked <span class='arrow {{link_order}}'></span></a></th>
<th><a class="{{ 'active '+link_order if sort_attribute == 'last_changed' else 'inactive' }}" href="{{url_for('index', sort='last_changed', order=link_order, tag=active_tag)}}">Last Changed <span class='arrow {{link_order}}'></span></a></th>
<th></th> <th></th>
</tr> </tr>
</thead> </thead>
<tbody> <tbody>
{% if not watches|length %}
{% set sorted_watches = watches|sort(attribute=sort_attribute, reverse=sort_order) %} <tr>
{% for watch in sorted_watches %} <td colspan="6">No website watches configured, please add a URL in the box above, or <a href="{{ url_for('import_page')}}" >import a list</a>.</td>
</tr>
{# WIP for pagination, disabled for now {% endif %}
{% if not ( loop.index >= 3 and loop.index <=4) %}{% continue %}{% endif %} --> {% for watch in (watches|sort(attribute=sort_attribute, reverse=sort_order == 'asc'))|pagination_slice(skip=pagination.skip) %}
#}
<tr id="{{ watch.uuid }}" <tr id="{{ watch.uuid }}"
class="{{ loop.cycle('pure-table-odd', 'pure-table-even') }} class="{{ loop.cycle('pure-table-odd', 'pure-table-even') }} processor-{{ watch['processor'] }}
{% if watch.last_error is defined and watch.last_error != False %}error{% endif %} {% if watch.last_error is defined and watch.last_error != False %}error{% endif %}
{% if watch.last_notification_error is defined and watch.last_notification_error != False %}error{% endif %} {% if watch.last_notification_error is defined and watch.last_notification_error != False %}error{% endif %}
{% if watch.paused is defined and watch.paused != False %}paused{% endif %} {% if watch.paused is defined and watch.paused != False %}paused{% endif %}
{% if watch.newest_history_key| int > watch.last_viewed and watch.history_n>=2 %}unviewed{% endif %} {% if watch.newest_history_key| int > watch.last_viewed and watch.history_n>=2 %}unviewed{% endif %}
{% if watch.uuid in queued_uuids %}queued{% endif %}"> {% if watch.uuid in queued_uuids %}queued{% endif %}">
<td class="inline checkbox-uuid" ><input name="uuids" type="checkbox" value="{{ watch.uuid}} "/> <span>{{ loop.index }}</span></td> <td class="inline checkbox-uuid" ><input name="uuids" type="checkbox" value="{{ watch.uuid}} " > <span>{{ loop.index+pagination.skip }}</span></td>
<td class="inline watch-controls"> <td class="inline watch-controls">
{% if not watch.paused %} {% if not watch.paused %}
<a class="state-off" href="{{url_for('index', op='pause', uuid=watch.uuid, tag=active_tag)}}"><img src="{{url_for('static_content', group='images', filename='pause.svg')}}" alt="Pause checks" title="Pause checks" class="icon icon-pause"/></a> <a class="state-off" href="{{url_for('index', op='pause', uuid=watch.uuid, tag=active_tag)}}"><img src="{{url_for('static_content', group='images', filename='pause.svg')}}" alt="Pause checks" title="Pause checks" class="icon icon-pause" ></a>
{% else %} {% else %}
<a class="state-on" href="{{url_for('index', op='pause', uuid=watch.uuid, tag=active_tag)}}"><img src="{{url_for('static_content', group='images', filename='play.svg')}}" alt="UnPause checks" title="UnPause checks" class="icon icon-unpause"/></a> <a class="state-on" href="{{url_for('index', op='pause', uuid=watch.uuid, tag=active_tag)}}"><img src="{{url_for('static_content', group='images', filename='play.svg')}}" alt="UnPause checks" title="UnPause checks" class="icon icon-unpause" ></a>
{% endif %} {% endif %}
<a class="link-mute state-{{'on' if watch.notification_muted else 'off'}}" href="{{url_for('index', op='mute', uuid=watch.uuid, tag=active_tag)}}"><img src="{{url_for('static_content', group='images', filename='bell-off.svg')}}" alt="Mute notifications" title="Mute notifications" class="icon icon-mute"/></a> <a class="link-mute state-{{'on' if watch.notification_muted else 'off'}}" href="{{url_for('index', op='mute', uuid=watch.uuid, tag=active_tag)}}"><img src="{{url_for('static_content', group='images', filename='bell-off.svg')}}" alt="Mute notifications" title="Mute notifications" class="icon icon-mute" ></a>
</td> </td>
<td class="title-col inline">{{watch.title if watch.title is not none and watch.title|length > 0 else watch.url}} <td class="title-col inline">{{watch.title if watch.title is not none and watch.title|length > 0 else watch.url}}
<a class="external" target="_blank" rel="noopener" href="{{ watch.link.replace('source:','') }}"></a> <a class="external" target="_blank" rel="noopener" href="{{ watch.link.replace('source:','') }}"></a>
<a class="link-spread" href="{{url_for('form_share_put_watch', uuid=watch.uuid)}}"><img style="height: 1em;display:inline-block;" src="{{url_for('static_content', group='images', filename='spread.svg')}}" class="icon icon-spread" /></a> <a class="link-spread" href="{{url_for('form_share_put_watch', uuid=watch.uuid)}}"><img src="{{url_for('static_content', group='images', filename='spread.svg')}}" class="status-icon icon icon-spread" title="Create a link to share watch config with others" ></a>
{%if watch.fetch_backend == "html_webdriver" %}<img style="height: 1em; display:inline-block;" src="{{url_for('static_content', group='images', filename='Google-Chrome-icon.png')}}" />{% endif %} {% if watch.get_fetch_backend == "html_webdriver"
or ( watch.get_fetch_backend == "system" and system_default_fetcher == 'html_webdriver' )
%}
<img class="status-icon" src="{{url_for('static_content', group='images', filename='Google-Chrome-icon.png')}}" title="Using a chrome browser" >
{% endif %}
{%if watch.is_pdf %}<img class="status-icon" src="{{url_for('static_content', group='images', filename='pdf-icon.svg')}}" title="Converting PDF to text" >{% endif %}
{% if watch.last_error is defined and watch.last_error != False %} {% if watch.last_error is defined and watch.last_error != False %}
<div class="fetch-error">{{ watch.last_error }}</div> <div class="fetch-error">{{ watch.last_error }}
{% if '403' in watch.last_error %}
{% if has_proxies %}
<a href="{{ url_for('settings_page', uuid=watch.uuid) }}#proxies">Try other proxies/location</a>&nbsp;
{% endif %}
<a href="{{ url_for('settings_page', uuid=watch.uuid) }}#proxies">Try adding external proxies/locations</a>
{% endif %}
</div>
{% endif %} {% endif %}
{% if watch.last_notification_error is defined and watch.last_notification_error != False %} {% if watch.last_notification_error is defined and watch.last_notification_error != False %}
<div class="fetch-error notification-error"><a href="{{url_for('notification_logs')}}">{{ watch.last_notification_error }}</a></div> <div class="fetch-error notification-error"><a href="{{url_for('notification_logs')}}">{{ watch.last_notification_error }}</a></div>
{% endif %} {% endif %}
{% if watch['processor'] == 'text_json_diff' %}
{% if watch['has_ldjson_price_data'] and not watch['track_ldjson_price_data'] %}
<div class="ldjson-price-track-offer">Embedded price data detected, follow only price data? <a href="{{url_for('price_data_follower.accept', uuid=watch.uuid)}}" class="pure-button button-xsmall">Yes</a> <a href="{{url_for('price_data_follower.reject', uuid=watch.uuid)}}" class="">No</a></div>
{% endif %}
{% if watch['track_ldjson_price_data'] == 'accepted' %}
<span class="tracking-ldjson-price-data" title="Automatically following embedded price information"><img src="{{url_for('static_content', group='images', filename='price-tag-icon.svg')}}" class="status-icon price-follow-tag-icon" > Price</span>
{% endif %}
{% endif %}
{% if watch['processor'] == 'restock_diff' %}
<span class="restock-label {{'in-stock' if watch['in_stock'] else 'not-in-stock' }}" title="detecting restock conditions">
<!-- maybe some object watch['processor'][restock_diff] or.. -->
{% if watch['last_checked'] %}
{% if watch['in_stock'] %} In stock {% else %} Not in stock {% endif %}
{% else %}
Not yet checked
{% endif %}
</span>
{% endif %}
{% if not active_tag %} {% if not active_tag %}
<span class="watch-tag-list">{{ watch.tag}}</span> <span class="watch-tag-list">{{ watch.tag}}</span>
{% endif %} {% endif %}
@@ -139,10 +184,7 @@
<a href="{{ url_for('rss', tag=active_tag , token=app_rss_token)}}"><img alt="RSS Feed" id="feed-icon" src="{{url_for('static_content', group='images', filename='Generic_Feed-icon.svg')}}" height="15"></a> <a href="{{ url_for('rss', tag=active_tag , token=app_rss_token)}}"><img alt="RSS Feed" id="feed-icon" src="{{url_for('static_content', group='images', filename='Generic_Feed-icon.svg')}}" height="15"></a>
</li> </li>
</ul> </ul>
- {# WIP for pagination, disabled for now
- {{ pagination(sorted_watches,3, pagination_page) }}
- #}
+ {{ pagination.links }}
</div> </div>
</form> </form>
</div> </div>
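The reworked overview template above relies on a pagination object and a pagination_slice filter that are defined outside this part of the diff. As a minimal sketch, with the names assumed from the template rather than taken from the backend code, such a Jinja filter could be registered like this:

# Sketch only; pagination_slice and the per-page handling are assumptions based on the template.
from flask import Flask

app = Flask(__name__)
PER_PAGE = 50  # would come from the new pager_size setting, 0 meaning "show everything"

@app.template_filter('pagination_slice')
def pagination_slice(watches, skip=0):
    """Return just the current page of watches; skip is the offset of the first item."""
    if not PER_PAGE:
        return watches
    return watches[skip:skip + PER_PAGE]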

View File

@@ -14,13 +14,16 @@ global app
def cleanup(datastore_path): def cleanup(datastore_path):
# Unlink test output files # Unlink test output files
- files = ['output.txt',
- 'url-watches.json',
- 'secret.txt',
- 'notification.txt',
- 'count.txt',
- 'endpoint-content.txt'
- ]
+ files = [
+ 'count.txt',
+ 'endpoint-content.txt',
+ 'headers.txt',
+ 'headers-testtag.txt',
+ 'notification.txt',
+ 'secret.txt',
+ 'url-watches.json',
+ 'output.txt',
+ ]
for file in files: for file in files:
try: try:
os.unlink("{}/{}".format(datastore_path, file)) os.unlink("{}/{}".format(datastore_path, file))

View File

@@ -1,10 +1,10 @@
{ {
"proxy-one": { "proxy-one": {
"label": "One", "label": "Proxy One",
"url": "http://127.0.0.1:3128" "url": "http://squid-one:3128"
}, },
"proxy-two": { "proxy-two": {
"label": "two", "label": "Proxy Two",
"url": "http://127.0.0.1:3129" "url": "http://squid-two:3128"
} }
} }

View File

@@ -0,0 +1,48 @@
acl localnet src 0.0.0.1-0.255.255.255 # RFC 1122 "this" network (LAN)
acl localnet src 10.0.0.0/8 # RFC 1918 local private network (LAN)
acl localnet src 100.64.0.0/10 # RFC 6598 shared address space (CGN)
acl localnet src 169.254.0.0/16 # RFC 3927 link-local (directly plugged) machines
acl localnet src 172.16.0.0/12 # RFC 1918 local private network (LAN)
acl localnet src 192.168.0.0/16 # RFC 1918 local private network (LAN)
acl localnet src fc00::/7 # RFC 4193 local private network range
acl localnet src fe80::/10 # RFC 4291 link-local (directly plugged) machines
acl localnet src 159.65.224.174
acl SSL_ports port 443
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
#http_access allow localhost manager
http_access deny manager
#http_access allow localhost
#http_access allow localnet
auth_param basic program /usr/lib/squid3/basic_ncsa_auth /etc/squid3/passwords
auth_param basic realm proxy
acl authenticated proxy_auth REQUIRED
http_access allow authenticated
http_access deny all
http_port 3128
coredump_dir /var/spool/squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern \/(Packages|Sources)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern \/Release(|\.gpg)$ 0 0% 0 refresh-ims
refresh_pattern \/InRelease$ 0 0% 0 refresh-ims
refresh_pattern \/(Translation-.*)(|\.bz2|\.gz|\.xz)$ 0 0% 0 refresh-ims
refresh_pattern . 0 20% 4320
logfile_rotate 0

View File

@@ -0,0 +1 @@
test:$apr1$xvhFolTA$E/kz5/Rw1ewcyaSUdwqZs.

View File

@@ -0,0 +1,50 @@
#!/usr/bin/python3
import time
from flask import url_for
from ..util import live_server_setup, wait_for_all_checks
# just make a request, we will grep in the docker logs to see it actually got called
def test_select_custom(client, live_server):
live_server_setup(live_server)
# Goto settings, add our custom one
res = client.post(
url_for("settings_page"),
data={
"requests-time_between_check-minutes": 180,
"application-ignore_whitespace": "y",
"application-fetch_backend": "html_requests",
"requests-extra_proxies-0-proxy_name": "custom-test-proxy",
# test:awesome is set in tests/proxy_list/squid-passwords.txt
"requests-extra_proxies-0-proxy_url": "http://test:awesome@squid-custom:3128",
},
follow_redirects=True
)
assert b"Settings updated." in res.data
res = client.post(
url_for("import_page"),
# Because a URL won't show in squid/proxy logs due to it being SSLed
# Use plain HTTP or a specific domain-name here
data={"urls": "https://changedetection.io/CHANGELOG.txt"},
follow_redirects=True
)
assert b"1 Imported" in res.data
wait_for_all_checks(client)
res = client.get(url_for("index"))
assert b'Proxy Authentication Required' not in res.data
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
# We should see something via proxy
assert b'<div class=""> - 0.' in res.data
#
# Now we should see the request in the container logs for "squid-squid-custom" because it will be the only default

View File

@@ -0,0 +1,2 @@
"""Tests for the app."""

View File

@@ -0,0 +1,3 @@
#!/usr/bin/python3
from .. import conftest

View File

@@ -0,0 +1,106 @@
#!/usr/bin/python3
import os
import time
from flask import url_for
from ..util import live_server_setup, wait_for_all_checks, extract_UUID_from_client
from changedetectionio.notification import (
default_notification_body,
default_notification_format,
default_notification_title,
valid_notification_formats,
)
def set_original_response():
test_return_data = """<html>
<body>
Some initial text<br>
<p>Which is across multiple lines</p>
<br>
So let's see what happens. <br>
<div>price: $10.99</div>
<div id="sametext">Out of stock</div>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def set_back_in_stock_response():
test_return_data = """<html>
<body>
Some initial text<br>
<p>Which is across multiple lines</p>
<br>
So let's see what happens. <br>
<div>price: $10.99</div>
<div id="sametext">Available!</div>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
# Add a restock_diff watch, flip the page between out-of-stock and back-in-stock, and check when a notification fires
def test_restock_detection(client, live_server):
set_original_response()
#assert os.getenv('PLAYWRIGHT_DRIVER_URL'), "Needs PLAYWRIGHT_DRIVER_URL set for this test"
time.sleep(1)
live_server_setup(live_server)
#####################
notification_url = url_for('test_notification_endpoint', _external=True).replace('http://localhost', 'http://changedet').replace('http', 'json')
#####################
# Set this up for when we remove the notification from the watch, it should fallback with these details
res = client.post(
url_for("settings_page"),
data={"application-notification_urls": notification_url,
"application-notification_title": "fallback-title "+default_notification_title,
"application-notification_body": "fallback-body "+default_notification_body,
"application-notification_format": default_notification_format,
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_webdriver"},
follow_redirects=True
)
# Add our URL to the import page, because the docker container (playwright/selenium) wont be able to connect to our usual test url
test_url = url_for('test_endpoint', _external=True).replace('http://localhost', 'http://changedet')
client.post(
url_for("form_quick_watch_add"),
data={"url": test_url, "tag": '', 'processor': 'restock_diff'},
follow_redirects=True
)
# Is it correctly show as NOT in stock?
wait_for_all_checks(client)
res = client.get(url_for("index"))
assert b'not-in-stock' in res.data
# Is it correctly shown as in stock
set_back_in_stock_response()
client.get(url_for("form_watch_checknow"), follow_redirects=True)
wait_for_all_checks(client)
res = client.get(url_for("index"))
assert b'not-in-stock' not in res.data
# We should have a notification
time.sleep(2)
assert os.path.isfile("test-datastore/notification.txt")
os.unlink("test-datastore/notification.txt")
# Default behaviour is to only fire notification when it goes OUT OF STOCK -> IN STOCK
# So here there should be no file, because we go IN STOCK -> OUT OF STOCK
set_original_response()
client.get(url_for("form_watch_checknow"), follow_redirects=True)
wait_for_all_checks(client)
assert not os.path.isfile("test-datastore/notification.txt")

Binary file not shown.

View File

@@ -1,18 +1,34 @@
+ from . util import live_server_setup, extract_UUID_from_client
from flask import url_for
- from . util import live_server_setup
+ import time
- def test_check_access_control(app, client):
+ def test_check_access_control(app, client, live_server):
# Still doesnt work, but this is closer.
+ live_server_setup(live_server)
with app.test_client(use_cookies=True) as c:
# Check we don't have any password protection enabled yet.
res = c.get(url_for("settings_page"))
assert b"Remove password" not in res.data
- # Enable password check.
+ # add something that we can hit via diff page later
res = c.post(
url_for("import_page"),
data={"urls": url_for('test_random_content_endpoint', _external=True)},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(2)
res = client.get(url_for("form_watch_checknow"), follow_redirects=True)
assert b'1 watches queued for rechecking.' in res.data
time.sleep(2)
# Enable password check and diff page access bypass
res = c.post( res = c.post(
url_for("settings_page"), url_for("settings_page"),
data={"application-password": "foobar", data={"application-password": "foobar",
"application-shared_diff_access": "True",
"requests-time_between_check-minutes": 180, "requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"}, 'application-fetch_backend': "html_requests"},
follow_redirects=True follow_redirects=True
@@ -22,9 +38,15 @@ def test_check_access_control(app, client):
# Check we hit the login # Check we hit the login
res = c.get(url_for("index"), follow_redirects=True) res = c.get(url_for("index"), follow_redirects=True)
# Should be logged out
assert b"Login" in res.data assert b"Login" in res.data
# The diff page should return something valid when logged out
res = client.get(url_for("diff_history_page", uuid="first"))
assert b'Random content' in res.data
# Menu should not be available yet # Menu should not be available yet
# assert b"SETTINGS" not in res.data # assert b"SETTINGS" not in res.data
# assert b"BACKUP" not in res.data # assert b"BACKUP" not in res.data
@@ -109,3 +131,25 @@ def test_check_access_control(app, client):
assert b"Password protection enabled" not in res.data assert b"Password protection enabled" not in res.data
# Now checking the diff access
# Enable password check and diff page access bypass
res = c.post(
url_for("settings_page"),
data={"application-password": "foobar",
# Should be disabled
# "application-shared_diff_access": "True",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Password protection enabled." in res.data
# Check we hit the login
res = c.get(url_for("index"), follow_redirects=True)
# Should be logged out
assert b"Login" in res.data
# The diff page should return something valid when logged out
res = client.get(url_for("diff_history_page", uuid="first"))
assert b'Random content' not in res.data

View File

@@ -0,0 +1,176 @@
#!/usr/bin/python3

import time
from flask import url_for
from .util import live_server_setup
from changedetectionio import html_tools


def set_original(excluding=None, add_line=None):
    test_return_data = """<html>
    <body>
    <p>Some initial text</p>
    <p>So let's see what happens.</p>
    <p>and a new line!</p>
    <p>The golden line</p>
    <p>A BREAK TO MAKE THE TOP LINE STAY AS "REMOVED" OR IT WILL GET COUNTED AS "CHANGED INTO"</p>
    <p>Something irrelevant</p>
    </body>
    </html>
    """

    if add_line:
        c = test_return_data.splitlines()
        c.insert(5, add_line)
        test_return_data = "\n".join(c)

    if excluding:
        output = ""
        for i in test_return_data.splitlines():
            if not excluding in i:
                output += f"{i}\n"
        test_return_data = output

    with open("test-datastore/endpoint-content.txt", "w") as f:
        f.write(test_return_data)


def test_setup(client, live_server):
    live_server_setup(live_server)


def test_check_removed_line_contains_trigger(client, live_server):
    sleep_time_for_fetch_thread = 3

    # Give the endpoint time to spin up
    time.sleep(1)
    set_original()
    # Add our URL to the import page
    test_url = url_for('test_endpoint', _external=True)
    res = client.post(
        url_for("import_page"),
        data={"urls": test_url},
        follow_redirects=True
    )
    assert b"1 Imported" in res.data

    # Give the thread time to pick it up
    time.sleep(sleep_time_for_fetch_thread)

    # Goto the edit page, add our ignore text
    # Add our URL to the import page
    res = client.post(
        url_for("edit_page", uuid="first"),
        data={"trigger_text": 'The golden line',
              "url": test_url,
              'fetch_backend': "html_requests",
              'filter_text_removed': 'y'},
        follow_redirects=True
    )
    assert b"Updated watch." in res.data
    time.sleep(sleep_time_for_fetch_thread)

    set_original(excluding='Something irrelevant')

    # A line thats not the trigger should not trigger anything
    res = client.get(url_for("form_watch_checknow"), follow_redirects=True)
    assert b'1 watches queued for rechecking.' in res.data
    time.sleep(sleep_time_for_fetch_thread)
    res = client.get(url_for("index"))
    assert b'unviewed' not in res.data

    # The trigger line is REMOVED, this should trigger
    set_original(excluding='The golden line')
    client.get(url_for("form_watch_checknow"), follow_redirects=True)
    time.sleep(sleep_time_for_fetch_thread)
    res = client.get(url_for("index"))
    assert b'unviewed' in res.data

    # Now add it back, and we should not get a trigger
    client.get(url_for("mark_all_viewed"), follow_redirects=True)
    set_original(excluding=None)
    client.get(url_for("form_watch_checknow"), follow_redirects=True)
    time.sleep(sleep_time_for_fetch_thread)
    res = client.get(url_for("index"))
    assert b'unviewed' not in res.data

    # Remove it again, and we should get a trigger
    set_original(excluding='The golden line')
    client.get(url_for("form_watch_checknow"), follow_redirects=True)
    time.sleep(sleep_time_for_fetch_thread)
    res = client.get(url_for("index"))
    assert b'unviewed' in res.data

    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
    assert b'Deleted' in res.data


def test_check_add_line_contains_trigger(client, live_server):
    sleep_time_for_fetch_thread = 3

    # Give the endpoint time to spin up
    time.sleep(1)

    test_notification_url = url_for('test_notification_endpoint', _external=True).replace('http://', 'post://') + "?xxx={{ watch_url }}"
    res = client.post(
        url_for("settings_page"),
        data={"application-notification_title": "New ChangeDetection.io Notification - {{ watch_url }}",
              "application-notification_body": 'triggered text was -{{triggered_text}}-',
              # https://github.com/caronc/apprise/wiki/Notify_Custom_JSON#get-parameter-manipulation
              "application-notification_urls": test_notification_url,
              "application-minutes_between_check": 180,
              "application-fetch_backend": "html_requests"
              },
        follow_redirects=True
    )
    assert b'Settings updated' in res.data

    set_original()
    # Add our URL to the import page
    test_url = url_for('test_endpoint', _external=True)
    res = client.post(
        url_for("import_page"),
        data={"urls": test_url},
        follow_redirects=True
    )
    assert b"1 Imported" in res.data

    # Give the thread time to pick it up
    time.sleep(sleep_time_for_fetch_thread)

    # Goto the edit page, add our ignore text
    # Add our URL to the import page
    res = client.post(
        url_for("edit_page", uuid="first"),
        data={"trigger_text": 'Oh yes please',
              "url": test_url,
              'fetch_backend': "html_requests",
              'filter_text_removed': '',
              'filter_text_added': 'y'},
        follow_redirects=True
    )
    assert b"Updated watch." in res.data
    time.sleep(sleep_time_for_fetch_thread)

    set_original(excluding='Something irrelevant')

    # A line thats not the trigger should not trigger anything
    res = client.get(url_for("form_watch_checknow"), follow_redirects=True)
    assert b'1 watches queued for rechecking.' in res.data
    time.sleep(sleep_time_for_fetch_thread)
    res = client.get(url_for("index"))
    assert b'unviewed' not in res.data

    # The trigger line is ADDED, this should trigger
    set_original(add_line='<p>Oh yes please</p>')
    client.get(url_for("form_watch_checknow"), follow_redirects=True)
    time.sleep(sleep_time_for_fetch_thread)
    res = client.get(url_for("index"))
    assert b'unviewed' in res.data

    with open("test-datastore/notification.txt", 'r') as f:
        response = f.read()
        assert '-Oh yes please-' in response

    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
    assert b'Deleted' in res.data
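
A note on the two tests above: the removed/added split they exercise can be pictured with nothing more than difflib. This is an illustrative sketch only (plain stdlib, not how changedetection.io classifies lines):

# Illustrative sketch (not the project's implementation): classify changed lines
# with difflib, then look for the trigger text only among removed (or added) lines.
import difflib

before = ["Some initial text", "The golden line", "Something irrelevant"]
after  = ["Some initial text", "Something irrelevant"]

removed = [line[2:] for line in difflib.ndiff(before, after) if line.startswith('- ')]
added   = [line[2:] for line in difflib.ndiff(before, after) if line.startswith('+ ')]

trigger = "The golden line"
assert any(trigger in line for line in removed)      # fires with 'filter_text_removed'
assert not any(trigger in line for line in added)    # would not fire with 'filter_text_added'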


@@ -11,10 +11,10 @@ import uuid
 def set_original_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div id="sametext">Some text thats the same</div>
     <div id="changetext">Some text that will change</div>
     </body>
@@ -29,10 +29,10 @@ def set_original_response():
 def set_modified_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>which has this one new line</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div id="sametext">Some text thats the same</div>
     <div id="changetext">Some text that changes</div>
     </body>
@@ -53,14 +53,15 @@ def is_valid_uuid(val):
         return False

-def test_api_simple(client, live_server):
+def test_setup(client, live_server):
     live_server_setup(live_server)

+def test_api_simple(client, live_server):
     api_key = extract_api_key_from_UI(client)

     # Create a watch
     set_original_response()
-    watch_uuid = None

     # Validate bad URL
     test_url = url_for('test_endpoint', _external=True,
@@ -80,25 +81,34 @@ def test_api_simple(client, live_server):
         headers={'content-type': 'application/json', 'x-api-key': api_key},
         follow_redirects=True
     )
-    s = json.loads(res.data)
-    assert is_valid_uuid(s['uuid'])
-    watch_uuid = s['uuid']
+    assert is_valid_uuid(res.json.get('uuid'))
+    watch_uuid = res.json.get('uuid')
     assert res.status_code == 201

     time.sleep(3)

     # Verify its in the list and that recheck worked
     res = client.get(
-        url_for("createwatch"),
+        url_for("createwatch", tag="OnE"),
         headers={'x-api-key': api_key}
     )
-    assert watch_uuid in json.loads(res.data).keys()
-    before_recheck_info = json.loads(res.data)[watch_uuid]
+    assert watch_uuid in res.json.keys()
+    before_recheck_info = res.json[watch_uuid]
     assert before_recheck_info['last_checked'] != 0

     #705 `last_changed` should be zero on the first check
     assert before_recheck_info['last_changed'] == 0
     assert before_recheck_info['title'] == 'My test URL'

+    # Check the limit by tag doesnt return anything when nothing found
+    res = client.get(
+        url_for("createwatch", tag="Something else entirely"),
+        headers={'x-api-key': api_key}
+    )
+    assert len(res.json) == 0
+
+    time.sleep(2)
+
     set_modified_response()
     # Trigger recheck of all ?recheck_all=1
     client.get(
@@ -112,7 +122,7 @@ def test_api_simple(client, live_server):
         url_for("createwatch"),
         headers={'x-api-key': api_key},
     )
-    after_recheck_info = json.loads(res.data)[watch_uuid]
+    after_recheck_info = res.json[watch_uuid]

     assert after_recheck_info['last_checked'] != before_recheck_info['last_checked']
     assert after_recheck_info['last_changed'] != 0
@@ -121,12 +131,11 @@ def test_api_simple(client, live_server):
         url_for("watchhistory", uuid=watch_uuid),
         headers={'x-api-key': api_key},
     )
-    history = json.loads(res.data)
-    assert len(history) == 2, "Should have two history entries (the original and the changed)"
+    assert len(res.json) == 2, "Should have two history entries (the original and the changed)"

     # Fetch a snapshot by timestamp, check the right one was found
     res = client.get(
-        url_for("watchsinglehistory", uuid=watch_uuid, timestamp=list(history.keys())[-1]),
+        url_for("watchsinglehistory", uuid=watch_uuid, timestamp=list(res.json.keys())[-1]),
         headers={'x-api-key': api_key},
     )
     assert b'which has this one new line' in res.data
@@ -143,7 +152,7 @@ def test_api_simple(client, live_server):
         url_for("watch", uuid=watch_uuid),
         headers={'x-api-key': api_key}
     )
-    watch = json.loads(res.data)
+    watch = res.json

     # @todo how to handle None/default global values?
     assert watch['history_n'] == 2, "Found replacement history section, which is in its own API"
@@ -152,10 +161,46 @@ def test_api_simple(client, live_server):
         url_for("systeminfo"),
         headers={'x-api-key': api_key},
     )
-    info = json.loads(res.data)
-    assert info.get('watch_count') == 1
-    assert info.get('uptime') > 0.5
+    assert res.json.get('watch_count') == 1
+    assert res.json.get('uptime') > 0.5
+
+    ######################################################
+    # Mute and Pause, check it worked
+    res = client.get(
+        url_for("watch", uuid=watch_uuid, paused='paused'),
+        headers={'x-api-key': api_key}
+    )
+    assert b'OK' in res.data
+    res = client.get(
+        url_for("watch", uuid=watch_uuid, muted='muted'),
+        headers={'x-api-key': api_key}
+    )
+    assert b'OK' in res.data
+
+    res = client.get(
+        url_for("watch", uuid=watch_uuid),
+        headers={'x-api-key': api_key}
+    )
+    assert res.json.get('paused') == True
+    assert res.json.get('notification_muted') == True
+
+    # Now unpause, unmute
+    res = client.get(
+        url_for("watch", uuid=watch_uuid, muted='unmuted'),
+        headers={'x-api-key': api_key}
+    )
+    assert b'OK' in res.data
+    res = client.get(
+        url_for("watch", uuid=watch_uuid, paused='unpaused'),
+        headers={'x-api-key': api_key}
+    )
+    assert b'OK' in res.data
+
+    res = client.get(
+        url_for("watch", uuid=watch_uuid),
+        headers={'x-api-key': api_key}
+    )
+    assert res.json.get('paused') == 0
+    assert res.json.get('notification_muted') == 0
+    ######################################################
+
     # Finally delete the watch
     res = client.delete(
@@ -169,9 +214,7 @@ def test_api_simple(client, live_server):
         url_for("createwatch"),
         headers={'x-api-key': api_key}
     )
-    watch_list = json.loads(res.data)
-    assert len(watch_list) == 0, "Watch list should be empty"
+    assert len(res.json) == 0, "Watch list should be empty"

 def test_access_denied(client, live_server):
     # `config_api_token_enabled` Should be On by default
@@ -203,3 +246,97 @@ def test_access_denied(client, live_server):
         url_for("createwatch")
     )
     assert res.status_code == 200
+
+    # Cleanup everything
+    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
+    assert b'Deleted' in res.data
+
+    res = client.post(
+        url_for("settings_page"),
+        data={
+            "requests-time_between_check-minutes": 180,
+            "application-fetch_backend": "html_requests",
+            "application-api_access_token_enabled": "y"
+        },
+        follow_redirects=True
+    )
+    assert b"Settings updated." in res.data
+
+def test_api_watch_PUT_update(client, live_server):
+    #live_server_setup(live_server)
+    api_key = extract_api_key_from_UI(client)
+    time.sleep(1)
+
+    # Create a watch
+    set_original_response()
+    test_url = url_for('test_endpoint', _external=True,
+                       headers={'x-api-key': api_key}, )
+
+    # Create new
+    res = client.post(
+        url_for("createwatch"),
+        data=json.dumps({"url": test_url, 'tag': "One, Two", "title": "My test URL", 'headers': {'cookie': 'yum'} }),
+        headers={'content-type': 'application/json', 'x-api-key': api_key},
+        follow_redirects=True
+    )
+    assert res.status_code == 201
+    time.sleep(1)
+
+    # Get a listing, it will be the first one
+    res = client.get(
+        url_for("createwatch"),
+        headers={'x-api-key': api_key}
+    )
+
+    watch_uuid = list(res.json.keys())[0]
+
+    # Check in the edit page just to be sure
+    res = client.get(
+        url_for("edit_page", uuid=watch_uuid),
+    )
+    assert b"cookie: yum" in res.data, "'cookie: yum' found in 'headers' section"
+
+    # HTTP PUT ( UPDATE an existing watch )
+    res = client.put(
+        url_for("watch", uuid=watch_uuid),
+        headers={'x-api-key': api_key, 'content-type': 'application/json'},
+        data=json.dumps({"title": "new title", 'time_between_check': {'minutes': 552}, 'headers': {'cookie': 'all eaten'}}),
+    )
+    assert res.status_code == 200, "HTTP PUT update was sent OK"
+
+    # HTTP GET single watch, title should be updated
+    res = client.get(
+        url_for("watch", uuid=watch_uuid),
+        headers={'x-api-key': api_key}
+    )
+    assert res.json.get('title') == 'new title'
+
+    # Check in the edit page just to be sure
+    res = client.get(
+        url_for("edit_page", uuid=watch_uuid),
+    )
+    assert b"new title" in res.data, "new title found in edit page"
+    assert b"552" in res.data, "552 minutes found in edit page"
+    assert b"One, Two" in res.data, "Tag 'One, Two' was found"
+    assert b"cookie: all eaten" in res.data, "'cookie: all eaten' found in 'headers' section"
+
+    ######################################################
+    # HTTP PUT try a field that doenst exist
+
+    # HTTP PUT an update
+    res = client.put(
+        url_for("watch", uuid=watch_uuid),
+        headers={'x-api-key': api_key, 'content-type': 'application/json'},
+        data=json.dumps({"title": "new title", "some other field": "uh oh"}),
+    )
+    assert res.status_code == 400, "Should get error 400 when we give a field that doesnt exist"
+    # Message will come from `flask_expects_json`
+    assert b'Additional properties are not allowed' in res.data
+
+    # Cleanup everything
+    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
+    assert b'Deleted' in res.data
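
A note on the recurring change in this file: `json.loads(res.data)` is being replaced with the Flask/Werkzeug test client's `res.json` property. A minimal, self-contained sketch (the `/demo` route is hypothetical, used only for illustration) showing that the two forms are equivalent:

# Minimal sketch: the test client's response exposes parsed JSON via `.json`
# (backed by get_json()), equivalent to json.loads(res.data).
import json
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/demo")
def demo():
    # hypothetical endpoint used only for this illustration
    return jsonify({"uuid": "1234", "last_checked": 0})

with app.test_client() as client:
    res = client.get("/demo")
    assert res.json == json.loads(res.data)
    assert res.json.get("uuid") == "1234"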


@@ -0,0 +1,146 @@
#!/usr/bin/python3

import time
from flask import url_for
from .util import live_server_setup, extract_UUID_from_client, extract_api_key_from_UI


def set_response_with_ldjson():
    test_return_data = """<html>
    <body>
    Some initial text<br>
    <p>Which is across multiple lines</p>
    <br>
    So let's see what happens. <br>
    <div class="sametext">Some text thats the same</div>
    <div class="changetext">Some text that will change</div>
    <script type="application/ld+json">
      {
        "@context":"https://schema.org/",
        "@type":"Product",
        "@id":"https://www.some-virtual-phone-shop.com/celular-iphone-14/p",
        "name":"Celular Iphone 14 Pro Max 256Gb E Sim A16 Bionic",
        "brand":{
            "@type":"Brand",
            "name":"APPLE"
        },
        "image":"https://www.some-virtual-phone-shop.com/15509426/image.jpg",
        "description":"You dont need it",
        "mpn":"111111",
        "sku":"22222",
        "offers":{
            "@type":"AggregateOffer",
            "lowPrice":8097000,
            "highPrice":8099900,
            "priceCurrency":"COP",
            "offers":[
                {
                    "@type":"Offer",
                    "price":8097000,
                    "priceCurrency":"COP",
                    "availability":"http://schema.org/InStock",
                    "sku":"102375961",
                    "itemCondition":"http://schema.org/NewCondition",
                    "seller":{
                        "@type":"Organization",
                        "name":"ajax"
                    }
                }
            ],
            "offerCount":1
        }
      }
    </script>
    </body>
    </html>
    """

    with open("test-datastore/endpoint-content.txt", "w") as f:
        f.write(test_return_data)
    return None


def set_response_without_ldjson():
    test_return_data = """<html>
    <body>
    Some initial text<br>
    <p>Which is across multiple lines</p>
    <br>
    So let's see what happens. <br>
    <div class="sametext">Some text thats the same</div>
    <div class="changetext">Some text that will change</div>
    </body>
    </html>
    """

    with open("test-datastore/endpoint-content.txt", "w") as f:
        f.write(test_return_data)
    return None


# actually only really used by the distll.io importer, but could be handy too
def test_check_ldjson_price_autodetect(client, live_server):
    live_server_setup(live_server)

    # Give the endpoint time to spin up
    time.sleep(1)

    set_response_with_ldjson()

    # Add our URL to the import page
    test_url = url_for('test_endpoint', _external=True)
    res = client.post(
        url_for("import_page"),
        data={"urls": test_url},
        follow_redirects=True
    )
    assert b"1 Imported" in res.data
    time.sleep(3)

    # Should get a notice that it's available
    res = client.get(url_for("index"))
    assert b'ldjson-price-track-offer' in res.data

    # Accept it
    uuid = extract_UUID_from_client(client)

    client.get(url_for('price_data_follower.accept', uuid=uuid, follow_redirects=True))
    time.sleep(2)

    # Trigger a check
    client.get(url_for("form_watch_checknow"), follow_redirects=True)
    time.sleep(2)

    # Offer should be gone
    res = client.get(url_for("index"))
    assert b'Embedded price data' not in res.data
    assert b'tracking-ldjson-price-data' in res.data

    # and last snapshop (via API) should be just the price
    api_key = extract_api_key_from_UI(client)
    res = client.get(
        url_for("watchsinglehistory", uuid=uuid, timestamp='latest'),
        headers={'x-api-key': api_key},
    )

    # Should see this (dont know where the whitespace came from)
    assert b'"highPrice": 8099900' in res.data
    # And not this cause its not the ld-json
    assert b"So let's see what happens" not in res.data

    client.get(url_for("form_delete", uuid="all"), follow_redirects=True)

    ##########################################################################################
    # And we shouldnt see the offer
    set_response_without_ldjson()

    # Add our URL to the import page
    test_url = url_for('test_endpoint', _external=True)
    res = client.post(
        url_for("import_page"),
        data={"urls": test_url},
        follow_redirects=True
    )
    assert b"1 Imported" in res.data
    time.sleep(3)

    res = client.get(url_for("index"))
    assert b'ldjson-price-track-offer' not in res.data

    ##########################################################################################

    client.get(url_for("form_delete", uuid="all"), follow_redirects=True)


@@ -3,7 +3,7 @@
 import time
 from flask import url_for
 from urllib.request import urlopen
-from .util import set_original_response, set_modified_response, live_server_setup, wait_for_all_checks
+from .util import set_original_response, set_modified_response, live_server_setup, wait_for_all_checks, extract_rss_token_from_UI

 sleep_time_for_fetch_thread = 3
@@ -11,7 +11,7 @@ sleep_time_for_fetch_thread = 3
 # Basic test to check inscriptus is not adding return line chars, basically works etc
 def test_inscriptus():
     from inscriptis import get_text
-    html_content = "<html><body>test!<br/>ok man</body></html>"
+    html_content = "<html><body>test!<br>ok man</body></html>"
     stripped_text_from_html = get_text(html_content)
     assert stripped_text_from_html == 'test!\nok man'
@@ -67,7 +67,7 @@ def test_check_basic_change_detection_functionality(client, live_server):
     # Force recheck
     res = client.get(url_for("form_watch_checknow"), follow_redirects=True)
-    assert b'1 watches are queued for rechecking.' in res.data
+    assert b'1 watches queued for rechecking.' in res.data

     wait_for_all_checks(client)
@@ -76,12 +76,13 @@ def test_check_basic_change_detection_functionality(client, live_server):
     assert b'unviewed' in res.data

     # #75, and it should be in the RSS feed
-    res = client.get(url_for("rss"))
+    rss_token = extract_rss_token_from_UI(client)
+    res = client.get(url_for("rss", token=rss_token, _external=True))
     expected_url = url_for('test_endpoint', _external=True)
     assert b'<rss' in res.data

     # re #16 should have the diff in here too
-    assert b'(into ) which has this one new line' in res.data
+    assert b'(into) which has this one new line' in res.data
     assert b'CDATA' in res.data
     assert expected_url.encode('utf-8') in res.data


@@ -8,10 +8,10 @@ from changedetectionio import html_tools
 def set_original_ignore_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     </body>
     </html>
@@ -24,10 +24,10 @@ def set_original_ignore_response():
 def set_modified_original_ignore_response():
     test_return_data = """<html>
     <body>
-    Some NEW nice initial text</br>
+    Some NEW nice initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <p>new ignore stuff</p>
     <p>out of stock</p>
     <p>blah</p>
@@ -44,11 +44,11 @@ def set_modified_original_ignore_response():
 def set_modified_response_minus_block_text():
     test_return_data = """<html>
     <body>
-    Some NEW nice initial text</br>
+    Some NEW nice initial text<br>
     <p>Which is across multiple lines</p>
     <p>now on sale $2/p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <p>new ignore stuff</p>
     <p>blah</p>
     </body>
@@ -87,7 +87,10 @@ def test_check_block_changedetection_text_NOT_present(client, live_server):
     # Add our URL to the import page
     res = client.post(
         url_for("edit_page", uuid="first"),
-        data={"text_should_not_be_present": ignore_text, "url": test_url, 'fetch_backend': "html_requests"},
+        data={"text_should_not_be_present": ignore_text,
+              "url": test_url,
+              'fetch_backend': "html_requests"
+              },
         follow_redirects=True
     )
     assert b"Updated watch." in res.data
@@ -129,7 +132,6 @@ def test_check_block_changedetection_text_NOT_present(client, live_server):
     set_modified_response_minus_block_text()
     client.get(url_for("form_watch_checknow"), follow_redirects=True)
     time.sleep(sleep_time_for_fetch_thread)
     res = client.get(url_for("index"))
     assert b'unviewed' in res.data


@@ -12,10 +12,10 @@ def test_setup(live_server):
 def set_original_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div id="sametext">Some text thats the same</div>
     <div id="changetext">Some text that will change</div>
     </body>
@@ -29,10 +29,10 @@ def set_original_response():
 def set_modified_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>which has this one new line</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div id="sametext">Some text thats the same</div>
     <div id="changetext">Some text that changes</div>
     </body>


@@ -25,10 +25,10 @@ def set_original_response():
     </ul>
     </nav>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div id="changetext">Some text that will change</div>
     </body>
     <footer>
@@ -54,10 +54,10 @@ def set_modified_response():
     </ul>
     </nav>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div id="changetext">Some text that changes</div>
     </body>
     <footer>
@@ -71,7 +71,6 @@ def set_modified_response():
 def test_element_removal_output():
-    from changedetectionio import fetch_site_status
     from inscriptis import get_text

     # Check text with sub-parts renders correctly
@@ -85,7 +84,7 @@ def test_element_removal_output():
     </ul>
     </nav>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>across multiple lines</p>
     <div id="changetext">Some text that changes</div>
     </body>


@@ -59,6 +59,8 @@ def test_http_error_handler(client, live_server):
     _runner_test_http_errors(client, live_server, 404, 'Page not found')
     _runner_test_http_errors(client, live_server, 500, '(Internal server Error) received')
     _runner_test_http_errors(client, live_server, 400, 'Error - Request returned a HTTP error code 400')
+    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
+    assert b'Deleted' in res.data

 # Just to be sure error text is properly handled
 def test_DNS_errors(client, live_server):
@@ -81,4 +83,48 @@ def test_DNS_errors(client, live_server):
     assert found_name_resolution_error

     # Should always record that we tried
     assert bytes("just now".encode('utf-8')) in res.data
+    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
+    assert b'Deleted' in res.data
+
+# Re 1513
+def test_low_level_errors_clear_correctly(client, live_server):
+    #live_server_setup(live_server)
+    # Give the endpoint time to spin up
+    time.sleep(1)
+
+    with open("test-datastore/endpoint-content.txt", "w") as f:
+        f.write("<html><body><div id=here>Hello world</div></body></html>")
+
+    # Add our URL to the import page
+    test_url = url_for('test_endpoint', _external=True)
+
+    res = client.post(
+        url_for("import_page"),
+        data={"urls": "https://dfkjasdkfjaidjfsdajfksdajfksdjfDOESNTEXIST.com"},
+        follow_redirects=True
+    )
+    assert b"1 Imported" in res.data
+    time.sleep(2)
+
+    # We should see the DNS error
+    res = client.get(url_for("index"))
+    found_name_resolution_error = b"Temporary failure in name resolution" in res.data or b"Name or service not known" in res.data
+    assert found_name_resolution_error
+
+    # Update with what should work
+    client.post(
+        url_for("edit_page", uuid="first"),
+        data={
+            "url": test_url,
+            "fetch_backend": "html_requests"},
+        follow_redirects=True
+    )
+
+    # Now the error should be gone
+    time.sleep(2)
+    res = client.get(url_for("index"))
+    found_name_resolution_error = b"Temporary failure in name resolution" in res.data or b"Name or service not known" in res.data
+    assert not found_name_resolution_error
+
+    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
+    assert b'Deleted' in res.data


@@ -10,10 +10,10 @@ from ..html_tools import *
 def set_original_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div id="sametext">Some text thats the same</div>
     <div class="changetext">Some text that will change</div>
     </body>
@@ -28,12 +28,12 @@ def set_original_response():
 def set_modified_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>which has this one new line</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div id="sametext">Some text thats the same</div>
-    <div class="changetext">Some text that did change ( 1000 online <br/> 80 guests<br/> 2000 online )</div>
+    <div class="changetext">Some text that did change ( 1000 online <br> 80 guests<br> 2000 online )</div>
     <div class="changetext">SomeCase insensitive 3456</div>
     </body>
     </html>
@@ -49,8 +49,8 @@ def set_multiline_response():
     test_return_data = """<html>
     <body>
-    <p>Something <br/>
-    across 6 billion multiple<br/>
+    <p>Something <br>
+    across 6 billion multiple<br>
     lines
     </p>


@@ -11,10 +11,10 @@ from changedetectionio.model import App
 def set_response_without_filter():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div id="nope-doesnt-exist">Some text thats the same</div>
     </body>
     </html>
@@ -28,10 +28,10 @@ def set_response_without_filter():
 def set_response_with_filter():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div class="ticket-available">Ticket now on sale!</div>
     </body>
     </html>
@@ -117,18 +117,3 @@ def test_filter_doesnt_exist_then_exists_should_get_notification(client, live_se
     assert 'Ticket now on sale' in notification
     os.unlink("test-datastore/notification.txt")
-
-    # Test that if it gets removed, then re-added, we get a notification
-    # Remove the target and re-add it, we should get a new notification
-    set_response_without_filter()
-    client.get(url_for("form_watch_checknow"), follow_redirects=True)
-    time.sleep(3)
-    assert not os.path.isfile("test-datastore/notification.txt")
-
-    set_response_with_filter()
-    client.get(url_for("form_watch_checknow"), follow_redirects=True)
-    time.sleep(3)
-    assert os.path.isfile("test-datastore/notification.txt")
-
-    # Also test that the filter was updated after the first one was requested


@@ -1,18 +1,17 @@
 import os
 import time
-import re
 from flask import url_for
-from .util import set_original_response, live_server_setup
+from .util import set_original_response, live_server_setup, extract_UUID_from_client
 from changedetectionio.model import App

 def set_response_with_filter():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <div id="nope-doesnt-exist">Some text thats the same</div>
     </body>
     </html>
@@ -121,6 +120,10 @@ def run_filter_test(client, content_filter):
         notification = f.read()
     assert not 'CSS/xPath filter was not present in the page' in notification
+
+    # Re #1247 - All tokens got replaced
+    uuid = extract_UUID_from_client(client)
+    assert uuid in notification
+
     # cleanup for the next
     client.get(
         url_for("form_delete", uuid="all"),
@@ -142,4 +145,4 @@ def test_check_xpath_filter_failure_notification(client, live_server):
     time.sleep(1)
     run_filter_test(client, '//*[@id="nope-doesnt-exist"]')

 # Test that notification is never sent


@@ -6,11 +6,11 @@ from ..html_tools import html_to_text
 def test_html_to_text_func():
     test_html = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
     <a href="/first_link"> More Text </a>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <a href="second_link.com"> Even More Text </a>
     </body>
     </html>
@@ -21,7 +21,7 @@ def test_html_to_text_func():
     no_links_text = \
         "Some initial text\n\nWhich is across multiple " \
-        "lines\n\nMore Text So let's see what happens. Even More Text"
+        "lines\n\nMore Text\nSo let's see what happens.\nEven More Text"

     # check that no links are in the extracted text
     assert text_content == no_links_text
@@ -31,7 +31,7 @@ def test_html_to_text_func():
     links_text = \
         "Some initial text\n\nWhich is across multiple lines\n\n[ More Text " \
-        "](/first_link) So let's see what happens. [ Even More Text ]" \
+        "](/first_link)\nSo let's see what happens.\n[ Even More Text ]" \
         "(second_link.com)"

     # check that links are present in the extracted text


@@ -1,7 +1,5 @@
 #!/usr/bin/python3

-import time
-from flask import url_for
 from . util import live_server_setup
 from changedetectionio import html_tools
@@ -11,7 +9,7 @@ def test_setup(live_server):
 # Unit test of the stripper
 # Always we are dealing in utf-8
 def test_strip_regex_text_func():
-    from changedetectionio import fetch_site_status
+    from ..processors import text_json_diff as fetch_site_status

     test_content = """
 but sometimes we want to remove the lines.


@@ -11,7 +11,8 @@ def test_setup(live_server):
 # Unit test of the stripper
 # Always we are dealing in utf-8
 def test_strip_text_func():
-    from changedetectionio import fetch_site_status
+    from ..processors import text_json_diff as fetch_site_status

     test_content = """
 Some content
@@ -33,10 +34,10 @@ def test_strip_text_func():
 def set_original_ignore_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     </body>
     </html>
@@ -49,10 +50,10 @@ def set_original_ignore_response():
 def set_modified_original_ignore_response():
     test_return_data = """<html>
     <body>
-    Some NEW nice initial text</br>
+    Some NEW nice initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     <p>new ignore stuff</p>
     <p>blah</p>
     </body>
@@ -68,11 +69,11 @@ def set_modified_original_ignore_response():
 def set_modified_ignore_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
     <P>ZZZZz</P>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     </body>
     </html>


@@ -12,10 +12,10 @@ def test_setup(live_server):
 def set_original_ignore_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <a href="/original_link"> Some More Text </a>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     </body>
     </html>
     """
@@ -29,10 +29,10 @@ def set_original_ignore_response():
 def set_modified_ignore_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <a href="/modified_link"> Some More Text </a>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     </body>
     </html>
     """


@@ -12,10 +12,10 @@ def test_setup(live_server):
 def set_original_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     </body>
     </html>
     """
@@ -27,10 +27,10 @@ def set_original_response():
 def set_some_changed_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines, and a new thing too.</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     </body>
     </html>
     """


@@ -12,15 +12,15 @@ def test_setup(live_server):
 def set_original_ignore_response_but_with_whitespace():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>
     Which is across multiple lines</p>
     <br>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     </body>
@@ -34,10 +34,10 @@ def set_original_ignore_response_but_with_whitespace():
 def set_original_ignore_response():
     test_return_data = """<html>
     <body>
-    Some initial text</br>
+    Some initial text<br>
     <p>Which is across multiple lines</p>
-    </br>
-    So let's see what happens. </br>
+    <br>
+    So let's see what happens. <br>
     </body>
     </html>


@@ -3,7 +3,7 @@
 import time
 from flask import url_for, escape
-from . util import live_server_setup
+from . util import live_server_setup, wait_for_all_checks
 import pytest
 jq_support = True
@@ -64,6 +64,24 @@ and it can also be repeated
     with pytest.raises(html_tools.JSONNotFound) as e_info:
         html_tools.extract_json_as_string('COMPLETE GIBBERISH, NO JSON!', "jq:.id")

+def test_unittest_inline_extract_body():
+    content = """
+    <html>
+    <head></head>
+    <body>
+    <pre style="word-wrap: break-word; white-space: pre-wrap;">
+    {"testKey": 42}
+    </pre>
+    </body>
+    </html>
+    """
+    from .. import html_tools
+
+    # See that we can find the second <script> one, which is not broken, and matches our filter
+    text = html_tools.extract_json_as_string(content, "json:$.testKey")
+    assert text == '42'
+
 def set_original_ext_response():
     data = """
     [
@@ -198,8 +216,8 @@ def test_check_json_without_filter(client, live_server):
     )

     # Should still see '"html": "<b>"'
-    assert b'&#34;&lt;b&gt;' in res.data
-    assert res.data.count(b'{\n') >= 2
+    assert b'&#34;html&#34;: &#34;&lt;b&gt;&#34;' in res.data
+    assert res.data.count(b'{') >= 2

     res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
     assert b'Deleted' in res.data
@@ -394,6 +412,79 @@ def check_json_ext_filter(json_filter, client, live_server):
     res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
     assert b'Deleted' in res.data

+def test_ignore_json_order(client, live_server):
+    # A change in order shouldn't trigger a notification
+    with open("test-datastore/endpoint-content.txt", "w") as f:
+        f.write('{"hello" : 123, "world": 123}')
+
+    # Add our URL to the import page
+    test_url = url_for('test_endpoint', content_type="application/json", _external=True)
+    res = client.post(
+        url_for("import_page"),
+        data={"urls": test_url},
+        follow_redirects=True
+    )
+    assert b"1 Imported" in res.data
+    time.sleep(2)
+
+    with open("test-datastore/endpoint-content.txt", "w") as f:
+        f.write('{"world" : 123, "hello": 123}')
+
+    # Trigger a check
+    client.get(url_for("form_watch_checknow"), follow_redirects=True)
+    time.sleep(2)
+
+    res = client.get(url_for("index"))
+    assert b'unviewed' not in res.data
+
+    # Just to be sure it still works
+    with open("test-datastore/endpoint-content.txt", "w") as f:
+        f.write('{"world" : 123, "hello": 124}')
+
+    # Trigger a check
+    client.get(url_for("form_watch_checknow"), follow_redirects=True)
+    time.sleep(2)
+
+    res = client.get(url_for("index"))
+    assert b'unviewed' in res.data
+
+    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
+    assert b'Deleted' in res.data
+
+def test_correct_header_detect(client, live_server):
+    # Like in https://github.com/dgtlmoon/changedetection.io/pull/1593
+    # Specify extra html that JSON is sometimes wrapped in - when using Browserless/Puppeteer etc
+    with open("test-datastore/endpoint-content.txt", "w") as f:
+        f.write('<html><body>{"hello" : 123, "world": 123}')
+
+    # Add our URL to the import page
+    # Check weird casing is cleaned up and detected also
+    test_url = url_for('test_endpoint', content_type="aPPlication/JSon", uppercase_headers=True, _external=True)
+    res = client.post(
+        url_for("import_page"),
+        data={"urls": test_url},
+        follow_redirects=True
+    )
+    assert b"1 Imported" in res.data
+    wait_for_all_checks(client)
+
+    res = client.get(url_for("index"))
+    # Fixed in #1593
+    assert b'No parsable JSON found in this document' not in res.data
+
+    res = client.get(
+        url_for("preview_page", uuid="first"),
+        follow_redirects=True
+    )
+    assert b'&#34;world&#34;:' in res.data
+    assert res.data.count(b'{') >= 2
+
+    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
+    assert b'Deleted' in res.data
+
 def test_check_jsonpath_ext_filter(client, live_server):
     check_json_ext_filter('json:$[?(@.status==Sold)]', client, live_server)
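
A quick illustration of why `test_ignore_json_order` should pass: parsed JSON objects compare equal regardless of key order, and a canonical dump gives a stable string to diff against. This is a generic sketch, not the project's diff code:

# Illustrative only: key order does not affect JSON object equality,
# and a canonical dump gives a stable string to compare.
import json

a = json.loads('{"hello" : 123, "world": 123}')
b = json.loads('{"world" : 123, "hello": 123}')
c = json.loads('{"world" : 123, "hello": 124}')

assert a == b                      # same data, different order -> no change
assert a != c                      # a real value change is still detected

canonical = json.dumps(a, sort_keys=True, indent=4)
assert canonical == json.dumps(b, sort_keys=True, indent=4)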


@@ -73,16 +73,12 @@ def test_check_notification(client, live_server):
     # We write the PNG to disk, but a JPEG should appear in the notification
     # Write the last screenshot png
     testimage_png = 'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII='
-    # This one is created when we save the screenshot from the webdriver/playwright session (converted from PNG)
-    testimage_jpg = '/9j/4AAQSkZJRgABAQEASABIAAD/2wBDAAMCAgMCAgMDAwMEAwMEBQgFBQQEBQoHBwYIDAoMDAsKCwsNDhIQDQ4RDgsLEBYQERMUFRUVDA8XGBYUGBIUFRT/wAALCAABAAEBAREA/8QAFAABAAAAAAAAAAAAAAAAAAAACf/EABQQAQAAAAAAAAAAAAAAAAAAAAD/2gAIAQEAAD8AKp//2Q=='

     uuid = extract_UUID_from_client(client)
     datastore = 'test-datastore'
     with open(os.path.join(datastore, str(uuid), 'last-screenshot.png'), 'wb') as f:
         f.write(base64.b64decode(testimage_png))
-    with open(os.path.join(datastore, str(uuid), 'last-screenshot.jpg'), 'wb') as f:
-        f.write(base64.b64decode(testimage_jpg))

     # Goto the edit page, add our ignore text
     # Add our URL to the import page
@@ -100,6 +96,8 @@ def test_check_notification(client, live_server):
                       "Diff URL: {{diff_url}}\n"
                       "Snapshot: {{current_snapshot}}\n"
                       "Diff: {{diff}}\n"
+                      "Diff Added: {{diff_added}}\n"
+                      "Diff Removed: {{diff_removed}}\n"
                       "Diff Full: {{diff_full}}\n"
                       ":-)",
                       "notification_screenshot": True,
@@ -147,7 +145,7 @@ def test_check_notification(client, live_server):
         assert ':-)' in notification_submission
         assert "Diff Full: Some initial text" in notification_submission
         assert "Diff: (changed) Which is across multiple lines" in notification_submission
-        assert "(into ) which has this one new line" in notification_submission
+        assert "(into) which has this one new line" in notification_submission
         # Re #342 - check for accidental python byte encoding of non-utf8/string
         assert "b'" not in notification_submission
         assert re.search('Watch UUID: [0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}', notification_submission, re.IGNORECASE)
@@ -160,12 +158,12 @@ def test_check_notification(client, live_server):
     # Check the attachment was added, and that it is a JPEG from the original PNG
     notification_submission_object = json.loads(notification_submission)
-    assert notification_submission_object['attachments'][0]['filename'] == 'last-screenshot.jpg'
+    # We keep PNG screenshots for now
+    assert notification_submission_object['attachments'][0]['filename'] == 'last-screenshot.png'
     assert len(notification_submission_object['attachments'][0]['base64'])
-    assert notification_submission_object['attachments'][0]['mimetype'] == 'image/jpeg'
+    assert notification_submission_object['attachments'][0]['mimetype'] == 'image/png'
     jpeg_in_attachment = base64.b64decode(notification_submission_object['attachments'][0]['base64'])
-    assert b'JFIF' in jpeg_in_attachment
-    assert testimage_png not in notification_submission
     # Assert that the JPEG is readable (didn't get chewed up somewhere)
     from PIL import Image
     import io
@@ -297,7 +295,10 @@ def test_notification_custom_endpoint_and_jinja2(client, live_server):
         follow_redirects=True
     )
     assert b'Settings updated' in res.data
+    client.get(
+        url_for("form_delete", uuid="all"),
+        follow_redirects=True
+    )
     # Add a watch and trigger a HTTP POST
     test_url = url_for('test_endpoint', _external=True)
     res = client.post(
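
As an aside, the PNG-attachment assertions above can be sanity-checked in isolation; this standalone sketch decodes the same 1x1 test PNG and verifies it really is a readable PNG (it needs Pillow, which the test already imports):

# Standalone check of the attachment logic: the base64 string decodes to a
# valid 1x1 PNG (signature bytes plus a readable image).
import base64
import io
from PIL import Image

testimage_png = 'iVBORw0KGgoAAAANSUhEUgAAAAEAAAABCAQAAAC1HAwCAAAAC0lEQVR42mNkYAAAAAYAAjCB0C8AAAAASUVORK5CYII='
raw = base64.b64decode(testimage_png)
assert raw.startswith(b'\x89PNG\r\n\x1a\n')   # PNG file signature

img = Image.open(io.BytesIO(raw))
assert img.format == 'PNG'
assert img.size == (1, 1)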


@@ -0,0 +1,40 @@
#!/usr/bin/python3

import time
from flask import url_for
from .util import set_original_response, set_modified_response, live_server_setup

sleep_time_for_fetch_thread = 3


# `subtractive_selectors` should still work in `source:` type requests
def test_fetch_pdf(client, live_server):
    import shutil
    shutil.copy("tests/test.pdf", "test-datastore/endpoint-test.pdf")

    live_server_setup(live_server)
    test_url = url_for('test_pdf_endpoint', _external=True)

    # Add our URL to the import page
    res = client.post(
        url_for("import_page"),
        data={"urls": test_url},
        follow_redirects=True
    )
    assert b"1 Imported" in res.data
    time.sleep(sleep_time_for_fetch_thread)

    res = client.get(
        url_for("preview_page", uuid="first"),
        follow_redirects=True
    )

    assert b'PDF-1.5' not in res.data
    assert b'hello world' in res.data

    # So we know if the file changes in other ways
    import hashlib
    md5 = hashlib.md5(open("test-datastore/endpoint-test.pdf", 'rb').read()).hexdigest().upper()
    # We should have one
    assert len(md5) > 0
    # And it's going to be in the document
    assert b'Document checksum - ' + bytes(str(md5).encode('utf-8')) in res.data


@@ -1,7 +1,8 @@
 import json
+import os
 import time
 from flask import url_for
-from . util import set_original_response, set_modified_response, live_server_setup
+from . util import set_original_response, set_modified_response, live_server_setup, wait_for_all_checks, extract_UUID_from_client

 def test_setup(live_server):
     live_server_setup(live_server)
@@ -9,8 +10,12 @@ def test_setup(live_server):
 # Hard to just add more live server URLs when one test is already running (I think)
 # So we add our test here (was in a different file)
 def test_headers_in_request(client, live_server):
+    #live_server_setup(live_server)
     # Add our URL to the import page
     test_url = url_for('test_headers', _external=True)
+    if os.getenv('PLAYWRIGHT_DRIVER_URL'):
+        # Because its no longer calling back to localhost but from browserless, set in test-only.yml
+        test_url = test_url.replace('localhost', 'changedet')

     # Add the test URL twice, we will check
     res = client.post(
@@ -29,7 +34,7 @@ def test_headers_in_request(client, live_server):
     )
     assert b"1 Imported" in res.data

-    time.sleep(3)
+    wait_for_all_checks(client)

     cookie_header = '_ga=GA1.2.1022228332; cookie-preferences=analytics:accepted;'
@@ -39,7 +44,7 @@ def test_headers_in_request(client, live_server):
         data={
             "url": test_url,
             "tag": "",
-            "fetch_backend": "html_requests",
+            "fetch_backend": 'html_webdriver' if os.getenv('PLAYWRIGHT_DRIVER_URL') else 'html_requests',
             "headers": "xxx:ooo\ncool:yeah\r\ncookie:"+cookie_header},
         follow_redirects=True
     )
@@ -47,7 +52,7 @@ def test_headers_in_request(client, live_server):

     # Give the thread time to pick up the first version
-    time.sleep(5)
+    wait_for_all_checks(client)

     # The service should echo back the request headers
     res = client.get(
@@ -63,7 +68,7 @@ def test_headers_in_request(client, live_server):
     from html import escape
     assert escape(cookie_header).encode('utf-8') in res.data

-    time.sleep(5)
+    wait_for_all_checks(client)

     # Re #137 - Examine the JSON index file, it should have only one set of headers entered
     watches_with_headers = 0
@@ -79,6 +84,9 @@ def test_headers_in_request(client, live_server):
 def test_body_in_request(client, live_server):
     # Add our URL to the import page
     test_url = url_for('test_body', _external=True)
+    if os.getenv('PLAYWRIGHT_DRIVER_URL'):
+        # Because its no longer calling back to localhost but from browserless, set in test-only.yml
+        test_url = test_url.replace('localhost', 'cdio')

     res = client.post(
         url_for("import_page"),
@@ -167,6 +175,9 @@ def test_body_in_request(client, live_server):
 def test_method_in_request(client, live_server):
     # Add our URL to the import page
     test_url = url_for('test_method', _external=True)
+    if os.getenv('PLAYWRIGHT_DRIVER_URL'):
+        # Because its no longer calling back to localhost but from browserless, set in test-only.yml
+        test_url = test_url.replace('localhost', 'cdio')

     # Add the test URL twice, we will check
     res = client.post(
@@ -234,3 +245,76 @@ def test_method_in_request(client, live_server):
     # Should be only one with method set to PATCH
     assert watches_with_method == 1
+
+    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
+    assert b'Deleted' in res.data
+
+def test_headers_textfile_in_request(client, live_server):
+    #live_server_setup(live_server)
+    # Add our URL to the import page
+    test_url = url_for('test_headers', _external=True)
+    if os.getenv('PLAYWRIGHT_DRIVER_URL'):
+        # Because its no longer calling back to localhost but from browserless, set in test-only.yml
+        test_url = test_url.replace('localhost', 'cdio')
+
+    print ("TEST URL IS ",test_url)
+
+    # Add the test URL twice, we will check
+    res = client.post(
+        url_for("import_page"),
+        data={"urls": test_url},
+        follow_redirects=True
+    )
+    assert b"1 Imported" in res.data
+    time.sleep(1)
+
+    # Add some headers to a request
+    res = client.post(
+        url_for("edit_page", uuid="first"),
+        data={
+            "url": test_url,
+            "tag": "testtag",
+            "fetch_backend": 'html_webdriver' if os.getenv('PLAYWRIGHT_DRIVER_URL') else 'html_requests',
+            "headers": "xxx:ooo\ncool:yeah\r\n"},
+        follow_redirects=True
+    )
+    assert b"Updated watch." in res.data
+    wait_for_all_checks(client)
+
+    with open('test-datastore/headers-testtag.txt', 'w') as f:
+        f.write("tag-header: test")
+
+    with open('test-datastore/headers.txt', 'w') as f:
+        f.write("global-header: nice\r\nnext-global-header: nice")
+
+    with open('test-datastore/'+extract_UUID_from_client(client)+'/headers.txt', 'w') as f:
+        f.write("watch-header: nice")
+
+    client.get(url_for("form_watch_checknow"), follow_redirects=True)
+
+    # Give the thread time to pick it up
+    wait_for_all_checks(client)
+
+    res = client.get(url_for("edit_page", uuid="first"))
+    assert b"Extra headers file found and will be added to this watch" in res.data
+
+    # Not needed anymore
+    os.unlink('test-datastore/headers.txt')
+    os.unlink('test-datastore/headers-testtag.txt')
+    os.unlink('test-datastore/'+extract_UUID_from_client(client)+'/headers.txt')
+
+    # The service should echo back the request verb
+    res = client.get(
+        url_for("preview_page", uuid="first"),
+        follow_redirects=True
+    )
+
+    assert b"Global-Header:nice" in res.data
+    assert b"Next-Global-Header:nice" in res.data
+    assert b"Xxx:ooo" in res.data
+    assert b"Watch-Header:nice" in res.data
+    assert b"Tag-Header:test" in res.data
+
+    #unlink headers.txt on start/stop
+    res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
+    assert b'Deleted' in res.data
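
The assertions in `test_headers_textfile_in_request` show that the global `headers.txt`, the per-tag `headers-<tag>.txt`, the per-watch `headers.txt` and the watch's own form headers all end up in the outgoing request. A rough, illustrative sketch of such a merge follows; the ordering and helper names here are assumptions for illustration, not the project's actual code:

# Illustrative merge only - grounded in what the test above asserts,
# not in changedetection.io's real implementation.
import os

def parse_header_file(path):
    headers = {}
    if os.path.isfile(path):
        with open(path) as f:
            for line in f.read().replace('\r\n', '\n').splitlines():
                if ':' in line:
                    name, value = line.split(':', 1)
                    headers[name.strip()] = value.strip()
    return headers

def merged_request_headers(datastore='test-datastore', tag='testtag', watch_uuid='some-uuid', form_headers=None):
    headers = {}
    headers.update(parse_header_file(f"{datastore}/headers.txt"))               # global
    headers.update(parse_header_file(f"{datastore}/headers-{tag}.txt"))         # per tag
    headers.update(parse_header_file(f"{datastore}/{watch_uuid}/headers.txt"))  # per watch
    headers.update(form_headers or {})                                          # watch edit form
    return headers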


@@ -0,0 +1,39 @@
#!/usr/bin/python3

import time
from flask import url_for
from .util import set_original_response, set_modified_response, live_server_setup, wait_for_all_checks, extract_rss_token_from_UI


def test_rss_and_token(client, live_server):
    set_original_response()
    live_server_setup(live_server)

    # Add our URL to the import page
    res = client.post(
        url_for("import_page"),
        data={"urls": url_for('test_random_content_endpoint', _external=True)},
        follow_redirects=True
    )
    assert b"1 Imported" in res.data

    rss_token = extract_rss_token_from_UI(client)

    time.sleep(2)
    client.get(url_for("form_watch_checknow"), follow_redirects=True)
    time.sleep(2)

    # Add our URL to the import page
    res = client.get(
        url_for("rss", token="bad token", _external=True),
        follow_redirects=True
    )
    assert b"Access denied, bad token" in res.data

    res = client.get(
        url_for("rss", token=rss_token, _external=True),
        follow_redirects=True
    )
    assert b"Access denied, bad token" not in res.data
    assert b"Random content" in res.data
