Compare commits

...

390 Commits

Author SHA1 Message Date
dgtlmoon
44b2159140 example memory usage that isnt cleared 2022-07-28 20:55:01 +02:00
dgtlmoon
3c9d2ded38 0.39.17 2022-07-28 13:07:51 +02:00
dgtlmoon
9f4364a130 Add https://discord.com/api notification hook to the automatic truncation due to Discords 2000 char limit 2022-07-28 12:34:55 +02:00
dgtlmoon
5bd9eaf99d UI Feature - Add watch in "paused" state, saving then unpauses (#779) 2022-07-28 12:13:26 +02:00
dgtlmoon
b1c51c0a65 Enhancement - support xPath text() function filter, for example "//title/text()" in RSS feeds (#778) 2022-07-28 11:50:31 +02:00
dgtlmoon
232bd92389 Bug fix - Filter "Only trigger when new lines appear" should check all history, not only the first item (#777) 2022-07-28 10:16:19 +02:00
dgtlmoon
e6173357a9 Visual Selector direct element finder fix 2022-07-28 09:19:10 +02:00
dgtlmoon
f2b8888aff Update README.md 2022-07-27 14:25:24 +02:00
dgtlmoon
9c46f175f9 Update README.md links 2022-07-27 14:23:18 +02:00
dgtlmoon
1f27865fdf Filter failure notification send default enable now controlled by setting Env var 2022-07-27 00:01:51 +02:00
dgtlmoon
faa42d75e0 Refactor of extract text filter - Regex, support Regex (groups) and all python regex flags via /something/aiLmsux (#773) 2022-07-26 17:34:34 +02:00
dgtlmoon
3b6e6d85bb Update README.md - adding LinkedIn link 2022-07-25 00:28:41 +02:00
dgtlmoon
30d6a272ce Update README.md - Adding Discord and YouTube links 2022-07-24 23:06:42 +02:00
dgtlmoon
291700554e Bug fix for alerting when xPath based filters are no longer present (#772) 2022-07-23 19:39:52 +02:00
dgtlmoon
a82fad7059 Send notification when CSS/xPath filter is missing after more than 6 (configurable) attempts (#771) 2022-07-23 17:19:00 +02:00
dgtlmoon
c2fe5ae0d1 mailto plaintext handling fix for 'plaintext' apprise integration 2022-07-23 16:55:31 +02:00
dgtlmoon
5beefdb7cc Minor code cleanups 2022-07-23 13:18:44 +02:00
dgtlmoon
872bbba71c Notifications - email - Correctly send plaintext notification email with plaintext header (#767) 2022-07-21 15:22:20 +02:00
Jonathon Sisson
d578de1a35 Form text tweak - Regex clarification (#766) 2022-07-21 10:05:59 +02:00
dgtlmoon
cdc104be10 Update README.md 2022-07-20 14:37:45 +02:00
dgtlmoon
dd0eeca056 Handle simple obfuscations - HomeDepot.com style price obfuscation (#764) 2022-07-20 14:02:22 +02:00
dgtlmoon
a95468be08 Fixing docker-compose.yml PLAYWRIGHT_DRIVER_URL example URL 2022-07-15 20:45:29 +02:00
Brandon Wees
ace44d0e00 Notifications fix - Discord - added discord webhook base url to truncation rules (#753)
Co-authored-by: bwees <branonwees@gmail.com>
2022-07-14 17:41:12 +02:00
dgtlmoon
ebb8b88621 Update Playwright URI Env example with stealth setting and CORS workaround (more reliable fetching) 2022-07-12 22:36:22 +02:00
dgtlmoon
12fc2200de remove extra file 2022-07-12 22:32:20 +02:00
dgtlmoon
52d3d375ba removing package-lock.json - not required to be in git 2022-07-10 20:29:11 +02:00
dgtlmoon
08117089e6 Share-icon cleanups 2022-07-10 20:24:49 +02:00
dgtlmoon
2ba3a6d53f Test improvement: Extract text should return all matches 2022-07-10 20:05:48 +02:00
dgtlmoon
2f636553a9 Bug fix: RSS Feed should also announce utf-8 charset 2022-07-10 18:50:21 +02:00
Freddie Leeman
0bde48b282 Regex extract filter: Return all regex results instead of first match (#730) 2022-07-10 15:09:10 +02:00
dgtlmoon
fae1164c0b Ability to specify JS before running change-detection (#744) 2022-07-10 13:56:01 +02:00
dgtlmoon
169c293143 Playwright - log console errors to output 2022-07-10 13:55:29 +02:00
dgtlmoon
46cb5cff66 UI Improvement - Clarifying "Visual Filter" tool as "Visual Selector Filter" 2022-07-10 12:51:12 +02:00
Simo Elalj
05584ea886 Use environment variables to override new watch settings defaults (user-agent, timeout, workers) (#742) 2022-07-08 20:50:04 +02:00
marvin8
176a591357 Update docker-compose.yml - Remove duplicate environment variables from playwright-chrome sample config in docker-compose.yml (#738) 2022-07-06 09:03:10 +02:00
dgtlmoon
15569f9592 0.39.16 2022-07-05 16:14:57 +02:00
dgtlmoon
5f9e475fe0 Fix notification apprise application name to changedetection.io #731 2022-06-30 23:11:03 +02:00
dgtlmoon
34b8784f50 Update README.md 2022-06-30 16:16:58 +02:00
dgtlmoon
2b054ced8c [new filter] Filter option - Trigger only when NEW content (lines) are detected ( compared to earlier text snapshots ) (#685) 2022-06-28 18:34:32 +02:00
dgtlmoon
6553980cd5 Playwright - Use HTTP Request Headers override (Cookie, etc) 2022-06-25 23:42:48 +02:00
jtagcat
7c12c47204 lang: prefer 'clear (snap) history' to 'scrub' (#721) 2022-06-25 13:43:57 +02:00
dgtlmoon
dbd9b470d7 Minor diff page improvements - list should be sorted 'newest first' and no need to include the current version to compare against (#716) 2022-06-23 10:16:05 +02:00
dgtlmoon
83555a9991 bug fix: last_changed was being set on the first fetch, should only be set on the change after the first fetch #705 2022-06-23 09:41:55 +02:00
dgtlmoon
5bfdb28bd2 Update README.md 2022-06-16 11:02:22 +02:00
dgtlmoon
31a6a6717b Improve docker-compose.yml browserless docker container example, add env var for STEALTH and BLOCK_ADS (#701) 2022-06-15 23:50:48 +02:00
dgtlmoon
7da32f9ac3 New filter - Block change-detection if text matches - for example, block change-detection while the text "out of stock" is on the page, know when the text is no longer on the page (#698) 2022-06-15 22:59:37 +02:00
dgtlmoon
bb732d3d2e Docker containers - :latest is now stable release, :dev is now master/nightly 2022-06-15 22:59:21 +02:00
dgtlmoon
485e55f9ed Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-06-15 19:12:34 +02:00
dgtlmoon
601a20ea49 Trigger filters improvement- it's possible some changes weren't getting detected because the previous checksum only recorded when an event occurred (#697) 2022-06-15 19:11:20 +02:00
dgtlmoon
76996b9eb8 Some changes werent getting triggered because the previous checksum only recorded when an event occured 2022-06-15 17:18:46 +02:00
dgtlmoon
fba2b1a39d Notifications regression bug in 0.39.15 - only sent the first notification URL 2022-06-15 17:05:03 +02:00
dgtlmoon
4a91505af5 Playwright screenshots - no need for high-res "bug workaround" screenshot, use lower quality/faster configurable image quality env var 2022-06-15 10:52:24 +02:00
dgtlmoon
4841c79b4c Adding extra check when updating DB on ReplyWithContentButNoText 2022-06-14 19:54:35 +02:00
dgtlmoon
2ba00d2e1d Notifications log - log what was sent after applying all cleanups 2022-06-14 17:01:13 +02:00
dgtlmoon
19c96f4bdd Re #555 - tgram:// notifications - strip added HTML tag which is not supported by Telegram 2022-06-14 12:00:21 +02:00
dgtlmoon
82b900fbf4 Give more helpful error message when a page doesnt load 2022-06-14 08:16:22 +02:00
dgtlmoon
358a365303 Tweaks to playwright fetch code - better timeout handling 2022-06-13 23:39:43 +02:00
dgtlmoon
a07ca4b136 Re #580 - New functionality - Random "jitter" delay to requests (#681) 2022-06-13 12:41:53 +02:00
dgtlmoon
ba8cf2c8cf 0.39.15 2022-06-12 14:05:34 +02:00
dgtlmoon
3106b6688e Watch overview list - adding spinner to make it easier to see whats currently being 'Checked' 2022-06-12 12:52:17 +02:00
dgtlmoon
2c83845dac Preview section - add helpful check 2022-06-12 11:10:06 +02:00
dgtlmoon
111266d6fa Send test notification - improved handling of errors 2022-06-12 10:47:00 +02:00
dgtlmoon
ead610151f Notification log - also log normal requests and make the log easier to find 2022-06-11 23:07:09 +02:00
dgtlmoon
7e1e763989 Update bug_report.md 2022-06-11 00:43:28 +02:00
dgtlmoon
327cc4af34 Use correct RSS CDATA handling (#662) 2022-06-08 18:40:01 +02:00
dgtlmoon
6008ff516e Improve logging (#671) 2022-06-08 18:32:41 +02:00
dgtlmoon
cdcf4b353f New [scrub] button when editing a watch - scrub single watch history (#672) 2022-06-08 18:32:25 +02:00
dgtlmoon
1ab70f8e86 Diff + Preview - Hide date selector widget when viewing screenshots as its not yet possible to compare screenshots (but will be soon!) 2022-06-07 19:53:13 +02:00
dgtlmoon
8227c012a7 Diff + Preview - Fixing screenshot behaviour after preference change 2022-06-07 19:51:17 +02:00
dgtlmoon
c113d5fb24 Screenshot handling on the diff/preview section refactor (#630) 2022-06-07 19:22:42 +02:00
dgtlmoon
8c8d4066d7 Shared watches - include "Extract text" filter 2022-06-07 17:06:05 +02:00
dgtlmoon
277dc9e1c1 Improve error message when filter not found in page result (#666) 2022-06-07 16:43:57 +02:00
dgtlmoon
fc0fd1ce9d "Extract text" filter - improve placeholder example 2022-06-06 18:26:47 +02:00
dgtlmoon
bd6127728a Visual selector - 'clear selection' button should clear the filter also 2022-06-06 17:07:29 +02:00
dgtlmoon
4101ae00c6 New feature - "Extract text" filter ability (#624) 2022-06-06 16:57:50 +02:00
dgtlmoon
62f14df3cb Fixing RSS feed HTML content formatting (#662) 2022-06-06 10:24:39 +02:00
Fuzzy
560d465c59 Update notification library - Improving telegram support 2022-06-06 10:07:50 +02:00
dgtlmoon
7929aeddfc 'Mark all viewed' button was missing in this version, added test also. (#652) 2022-06-02 10:01:03 +02:00
dgtlmoon
8294519f43 Content fetcher - Handle when a page doesnt load properly 2022-06-01 13:12:37 +02:00
dgtlmoon
8ba8a220b6 Playwright - Correctly close browser context/sessions on exceptions 2022-06-01 12:59:44 +02:00
dgtlmoon
aa3c8a9370 Move history data to a textfile, improves memory handling (#638) 2022-05-31 23:43:50 +02:00
dgtlmoon
dbb5468cdc Update feature_request.md 2022-05-31 22:07:22 +02:00
dgtlmoon
329c7620fb Remove UK Covid news 2022-05-31 22:04:35 +02:00
Amos (LFlare) Ng
1f974bfbb0 Visual Selector fix: Firefox compatibility - Visual Selector (#646) 2022-05-31 09:04:01 +02:00
Tim Loderhose
437c8525af Remove group tag arbitrary length limit (#645) 2022-05-30 18:28:53 +02:00
dgtlmoon
a2a1d5ae90 Distill.io import bug fix when no tags assigned to a watch (#557) 2022-05-29 22:04:23 +02:00
dgtlmoon
2566de2aae Ignore whitespace on by default 2022-05-28 13:30:57 +02:00
dgtlmoon
dfec8dbb39 Visual Selector - clear events when changing tabs 2022-05-25 15:47:30 +02:00
dgtlmoon
5cefb16e52 Minor code cleanup 2022-05-25 15:38:40 +02:00
dgtlmoon
341ae24b73 Re #616 - content trigger - adding extra test (#620) 2022-05-25 15:31:59 +02:00
dgtlmoon
f47c2fb7f6 README.md update Visual Selector tool - tidy up screenshots, improve text 2022-05-25 11:44:59 +02:00
dgtlmoon
9d742446ab Playwright - ByPass CSP for more reliable JS scraping, disable accept downloads 2022-05-25 11:05:18 +02:00
dgtlmoon
e3e022b0f4 VisualSelector - Better handling of filter targets that are no longer available in the HTML 2022-05-25 10:23:43 +02:00
dgtlmoon
6de4027c27 Update bug_report.md 2022-05-24 14:13:11 +02:00
dgtlmoon
cda3837355 pip build fix - include API module 2022-05-24 00:16:50 +02:00
dgtlmoon
7983675325 Visual Selector - be more resilient when sites interfere with the xPath scraping 2022-05-24 00:10:38 +02:00
dgtlmoon
eef56e52c6 Adding new Visual Selector for choosing the area of the webpage to monitor - playwright/browserless only (#566) 2022-05-23 23:44:51 +02:00
dgtlmoon
8e3195f394 0.39.14 2022-05-23 14:40:26 +02:00
dgtlmoon
e17c2121f7 Fix encoding errors with XPath filters from UTF-8 responses (#619) 2022-05-20 18:07:08 +02:00
dgtlmoon
07e279b38d API Interface (#617) 2022-05-20 16:27:51 +02:00
dgtlmoon
2c834cfe37 Add note that changedetection is not performed on the screenshot just yet (WIP https://github.com/dgtlmoon/changedetection.io/pull/419 ) 2022-05-20 12:52:41 +02:00
dgtlmoon
dbb5c666f0 Fixing edit template HTML 2022-05-18 14:09:39 +02:00
dgtlmoon
70b3493866 Proxy settings on watch should have a "[ ] default" option (#610) 2022-05-18 13:59:54 +02:00
dgtlmoon
3b11c474d1 Input field tidyup (#611) 2022-05-18 13:59:17 +02:00
dgtlmoon
890e1e6dcd Update wiki link for 'More info' about sharing a watch and its configuration 2022-05-17 22:44:36 +02:00
dgtlmoon
6734fb91a2 Option to control if pages with no renderable content are a change (example: JS webapps that dont render any text sometimes) (#608) 2022-05-17 22:22:00 +02:00
dgtlmoon
16809b48f8 Playwright - raise EmptyReply on empty reply, no need to process further 2022-05-17 18:40:15 +02:00
dgtlmoon
67c833d2bc Re #214 - configurable wait extra seconds for webdriver requests before extracting text (#606) 2022-05-17 18:35:33 +02:00
weeix
31fea55ee4 Fix PLAYWRIGHT_DRIVER_URL default value (cf. #587) (#599) 2022-05-14 22:34:44 +02:00
dgtlmoon
b6c50d3b1a Update PIP readme.md 2022-05-10 22:46:59 +02:00
dgtlmoon
034507f14f Fixing Pip install problem - Update MANIFEST to include model/ subdir, improving imports (#593) 2022-05-10 22:45:08 +02:00
dgtlmoon
0e385b1c22 0.39.13 2022-05-10 17:24:38 +02:00
dgtlmoon
f28c260576 Distill.io JSON export file importer (#592) 2022-05-10 17:15:41 +02:00
dgtlmoon
18f0b63b7d Ability to specify a list of proxies to choose from, always using the first one by default, See wiki (#591) 2022-05-08 20:35:36 +02:00
Thilo-Alexander Ginkel
97045e7a7b Improving Playwright docs (#588) 2022-05-07 22:23:17 +02:00
dgtlmoon
9807cf0cda Playwright - code fix 2022-05-07 17:29:59 +02:00
dgtlmoon
d4b5237103 Playwright fetcher - more reliable by just waiting arbitrary seconds after the last network IO 2022-05-07 17:14:40 +02:00
dgtlmoon
dc6f76ba64 Make proxy configuration more consistent - see https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration (#585) 2022-05-07 16:37:56 +02:00
dgtlmoon
1f2f93184e Playwright fetcher - use the correct default User-Agent 2022-05-06 23:59:38 +02:00
dgtlmoon
0f08c8dda3 Toggle visibility of extra requests options/settings when not in use (#584) 2022-05-06 23:40:32 +02:00
dgtlmoon
68db20168e Add new fetch method: Playwright Chromium (Selenium/WebDriver alternative) (#489) 2022-05-02 21:40:40 +02:00
dgtlmoon
1d4474f5a3 Simplify scrub operation (simply cleans all) (#575) 2022-05-02 21:10:23 +02:00
dgtlmoon
613308881c Bugfix - dont update record when deleted during check 2022-05-01 21:41:29 +02:00
dgtlmoon
f69585b276 Improving support info in README.md 2022-04-29 20:26:02 +02:00
dgtlmoon
0179940df1 Handle deletions better (#570) 2022-04-29 19:12:33 +02:00
dgtlmoon
c0d0424e7e Data storage bug fix #569 2022-04-29 18:26:15 +02:00
dgtlmoon
014dc61222 Upgrade notifications library - fixing marketup in email subject 2022-04-29 09:39:40 +02:00
dgtlmoon
06517bfd22 Ability to 'Share' a watch by a generated link, this will include all filters and triggers - see Wiki (#563) 2022-04-26 10:52:08 +02:00
dgtlmoon
b3a115dd4a Upgrade notifications library Re #555 - fixing telegram HTML markup in notification title 2022-04-25 23:12:32 +02:00
dgtlmoon
ffc4215411 Unify MINIMUM_SECONDS_RECHECK_TIME env var variable to 60 seconds 2022-04-24 20:37:30 +02:00
dgtlmoon
9e708810d1 Seconds/minutes/hours/days between checks form field upgrade from 'minutes' only (#512) 2022-04-24 16:56:32 +02:00
dgtlmoon
1e8aa6158b Form styling improvements 2022-04-24 14:40:53 +02:00
dgtlmoon
015353eccc Form field handling improvements - fixing field list handler for empty lines 2022-04-24 13:53:13 +02:00
dgtlmoon
501183e66b Fix "Add email" button in main global notification settings 2022-04-22 10:51:52 +02:00
dgtlmoon
def74f27e6 Test notification button fixed in main settings (#556) 2022-04-21 21:36:10 +02:00
dgtlmoon
37775a46c6 tgram:// be sure total notification size is always under their 4096 size limit 2022-04-21 16:28:15 +02:00
dgtlmoon
e4eaa0c817 Shows which items are already in the queue, disables adding to the queue if already in the recheck queue (#552) 2022-04-21 12:52:45 +02:00
dgtlmoon
206ded4201 Notifications - Signal API support, Ntfy support, hotmail, matrix, Gotify API fixes 2022-04-20 23:13:55 +02:00
dgtlmoon
9e71f2aa35 Discord:// notification size limit - also includes the notification title 2022-04-20 17:00:21 +02:00
dgtlmoon
f9594aeffb Fix spelling errors 2022-04-20 09:51:53 +02:00
dgtlmoon
b4e1353376 Update README.md 2022-04-19 23:56:11 +02:00
dgtlmoon
5b670c38d3 Update README.md 2022-04-19 23:48:21 +02:00
dgtlmoon
2a9fb12451 Import speed improvements, and adding an import URL batch size of 5,000 to stop accidental CPU overload (#549) 2022-04-19 23:15:32 +02:00
dgtlmoon
6c3c5dc28a Ability to set the default fetch mode via the DEFAULT_FETCH_BACKEND variable 2022-04-19 23:15:00 +02:00
dgtlmoon
8f062bfec9 Refactor form handling (#548) 2022-04-19 21:43:07 +02:00
dgtlmoon
380c512cc2 Adding support for change detection of HTML source-code via "source:https://website.com" prefix (#540) 2022-04-12 17:36:29 +02:00
dgtlmoon
d7ed7c44ed Re-label the quick-add widget placeholder 'tag' to 'watch group' 2022-04-12 10:55:43 +02:00
dgtlmoon
34a87c0f41 HTTP Fetcher code improvements 2022-04-12 08:36:08 +02:00
dgtlmoon
4074fe53f1 Adding RSS metadata auto-discovery 2022-04-12 07:35:47 +02:00
Tristan Hill
44d599d0d1 Upgrade WTforms form handler to v3 (#523) 2022-04-09 19:50:56 +02:00
dgtlmoon
615fe9290a 0.39.12 2022-04-09 14:16:30 +02:00
dgtlmoon
2cc6955bc3 Miscellaneous settings form visual improvements (#535) 2022-04-09 12:15:34 +02:00
dgtlmoon
9809af142d Option to render links as [Some Text ](/link), adds the ability to change-detect on hyperlink changes 2022-04-09 10:35:14 +02:00
dgtlmoon
1890881977 Specify our Discord avatar_url as default avatar_url 2022-04-08 18:35:59 +02:00
dgtlmoon
9fc2fe85d5 Minor git updates 2022-04-08 17:21:42 +02:00
dgtlmoon
bb3c546838 Fix screenshot tab name 2022-04-08 17:08:06 +02:00
dgtlmoon
165f794595 Discord:// notifications should be cut to 2000 chars or Discord will not process them. (#531 + #323) 2022-04-08 16:32:04 +02:00
dgtlmoon
a440eece9e Make long reports in the notification error log easier to read 2022-04-08 14:12:42 +02:00
dgtlmoon
34c83f0e7c [Add email] button in notification settings with a prefix set from NOTIFICATION_MAIL_BUTTON_PREFIX env variable when defined. (#528) 2022-04-07 18:18:23 +02:00
dgtlmoon
f6e518497a Update README.md 2022-04-07 14:59:31 +02:00
dgtlmoon
63e91a3d66 Skip processing a watch into the RSS feed if there's not enough data to examine (fixes Internal Server Error when accessing the RSS feed) (#521) 2022-04-05 20:31:31 +02:00
dgtlmoon
3034d047c2 Introduce an AJAX button for sending test notifications instead of the checkbox (#519) 2022-04-05 18:04:26 +02:00
dgtlmoon
2620818ba7 Make text tab always available at default 2022-04-02 14:55:40 +02:00
dgtlmoon
9fe4f95990 When fetching a snapshot via Chrome, make the most recent screenshot available on the Diff and Preview pages (#516) 2022-04-02 14:49:32 +02:00
dgtlmoon
ffd2a89d60 Remove 'unviewed' status in watch table when Diff link clicked (#514) 2022-03-31 11:01:07 +02:00
dgtlmoon
8f40f19328 RSS feed CDATA should contain difference output 2022-03-30 10:51:10 +02:00
dgtlmoon
082634f851 Fix - {diff} and {diff_full} notifications tokens were not always including the full output 2022-03-29 19:18:26 +02:00
dgtlmoon
334010025f Update README.md 2022-03-26 14:02:56 +01:00
dgtlmoon
81aa8fa16b Update README.md 2022-03-26 09:56:56 +01:00
dgtlmoon
c79d6824e3 Minor UI cleanups (mobile tabs, font sizing) (#503) 2022-03-25 23:37:28 +01:00
zznidar
946377d2be Fix typo in Filters & Triggers settings. (#495) 2022-03-23 23:18:04 +01:00
zznidar
5db9a30ad4 Add autofocus attribute to password login field (#496) 2022-03-23 23:17:47 +01:00
dgtlmoon
1d060225e1 0.39.11 2022-03-23 09:42:51 +01:00
dgtlmoon
7e0f0d0fd8 Microsoft Windows installation fixes (#492) 2022-03-22 23:08:08 +01:00
dgtlmoon
8b2afa2220 GitHub tweak - container tags should be CSV list (Fix ghcr.io not building) 2022-03-22 00:08:05 +01:00
dgtlmoon
f55ffa0f62 GitHub tweak - build containers also on push to master 2022-03-21 23:08:17 +01:00
dgtlmoon
942c3f021f Allow changedetector to ignore status codes as a per-site setting (#479) (#485)
Co-authored-by: Ara Hayrabedian <ara.hayrabedian@gmail.com>
2022-03-21 23:03:54 +01:00
dgtlmoon
5483f5d694 Security update - Use CSRF token protection for forms, make "remove password" use HTTP Post (#484) 2022-03-21 22:54:27 +01:00
dgtlmoon
f2fa638480 Security update - Protect against file:/// type access by webdriver/chrome. (#483) 2022-03-21 20:59:20 +01:00
dgtlmoon
82d1a7f73e Only build container on GitHub releases, not tests 2022-03-20 16:57:36 +01:00
dgtlmoon
9fc291fb63 Also change container names to help stop some DNS issues 2022-03-17 19:59:37 +01:00
dgtlmoon
3e8a15456a Detect byte-encoding when the server mishandles the content-type header reply (#472) 2022-03-17 10:28:02 +01:00
dgtlmoon
2a03f3f57e Improving form/edit example markup 2022-03-13 12:00:45 +01:00
dgtlmoon
ffad5cca97 JSON diff/preview should use utf-8 encoding where possible (#465) 2022-03-13 11:37:51 +01:00
Tim Loderhose
60a9a786e0 Fix typo in settings form 2022-03-13 10:55:37 +01:00
dgtlmoon
165e950e55 Add python venv to .gitignore 2022-03-13 10:53:33 +01:00
dgtlmoon
c25294ca57 0.39.10 2022-03-12 17:28:30 +01:00
Tim Loderhose
d4359c2e67 Add filter to remove elements by CSS rule from HTML before change detection is run (#445) 2022-03-12 13:29:30 +01:00
dgtlmoon
44fc804991 Minor updates to filters form text 2022-03-12 11:20:43 +01:00
dgtlmoon
b72c9eaf62 Re #448 - Dont use changedetection.io as the container name and hostname, fix problems fetching from the real changedetection.io webserver :) 2022-03-12 08:24:51 +01:00
dgtlmoon
7ce9e4dfc2 Testing - Refactor HTTP Request Type test (#453) 2022-03-11 18:50:02 +01:00
dgtlmoon
3cc6586695 Make table header font size the same as content 2022-03-07 13:03:59 +01:00
dgtlmoon
09204cb43f Adjust background colours 2022-03-06 19:03:59 +01:00
dgtlmoon
a709122874 Handle the case where the visitor is already logged-in and tries to login again (#447) 2022-03-06 18:19:05 +01:00
dgtlmoon
efbeaf9535 Make the Request Override settings easier to understand 2022-03-06 17:23:21 +01:00
dgtlmoon
1a19fba07d Minor tweak to notification token table 2022-03-06 17:10:30 +01:00
dgtlmoon
eb9020c175 Style tweak to watch form 2022-03-06 17:05:23 +01:00
dgtlmoon
13bb44e4f8 Login form style fixes 2022-03-06 17:03:15 +01:00
dgtlmoon
47f294c23b Upgrade apprise notification engine to 0.9.7 (important telegram fixes) 2022-03-05 13:14:14 +01:00
dgtlmoon
a4cce16188 Remove pytest from production release pip requirements 2022-03-05 13:12:15 +01:00
dgtlmoon
69aec23d1d Style fix for background image relative to X-Forwarded-Prefix when running via reverse proxy subdirectory 2022-03-05 13:08:57 +01:00
dgtlmoon
f85ccffe0a Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-03-04 13:13:54 +01:00
dgtlmoon
0005131472 Re-arranging primary links so the important ones are easier to find on mobile 2022-03-04 13:06:39 +01:00
dgtlmoon
3be1f4ea44 Set authentication cookie path relative to X-Forwarded-Prefix when running via reverse proxy subdirectory (#446) 2022-03-04 11:23:32 +01:00
dgtlmoon
46c72a7fb3 Upgrade inscriptis HTML converter to version 2.2~ (#434) 2022-03-01 17:58:54 +01:00
dgtlmoon
96664ffb10 Better text/plain detection and refactor tests (#443) 2022-03-01 17:50:15 +01:00
dgtlmoon
615fa2c5b2 Tweak support tabs and text (#440) 2022-02-28 22:39:32 +01:00
dgtlmoon
fd45fcce2f Include link to changedetection.io hosted option (#439) 2022-02-28 15:47:59 +01:00
dgtlmoon
75ca7ec504 Improved CPU usage around the loop responsible for what sites needs to be checked 2022-02-28 15:08:51 +01:00
dgtlmoon
8b1e9f6591 Update README.md with hosting options 2022-02-26 18:42:54 +01:00
dgtlmoon
883aa968fd 0.39.9 2022-02-24 17:02:50 +01:00
dgtlmoon
3240ed2339 Minor reliability upgrade for large datasets - retry deepcopy (#436) 2022-02-24 16:58:51 +01:00
dgtlmoon
a89ffffc76 "Recheck" button should work when entry is in paused state 2022-02-24 16:49:48 +01:00
dgtlmoon
fda93c3798 Better file exception handling on saving index JSON 2022-02-24 16:36:24 +01:00
dgtlmoon
a51c555964 Fix small issue in highlight trigger/ignore preview page with setting the background colours, add test 2022-02-23 12:30:36 +01:00
dgtlmoon
b401998030 Ensure string matching on the ignore filter is always case-INsensitive 2022-02-23 12:01:11 +01:00
dgtlmoon
014fda9058 Ability to visualise trigger and filter rules against the current snapshot on the preview page 2022-02-23 10:49:25 +01:00
dgtlmoon
dd384619e0 Update README.md 2022-02-19 13:41:54 +01:00
Michael
85715120e2 XPath RegularExpression support 2022-02-19 13:40:57 +01:00
dgtlmoon
a0e4f9b88a better checking of JSON type 2022-02-17 18:16:47 +01:00
dgtlmoon
04bef6091e Make system level errors from the HTTP fetchers easier to find (#421) 2022-02-13 23:43:45 +01:00
dependabot[bot]
536948c8c6 Bump node-sass from 6.0.1 to 7.0.0 in /changedetectionio/static/styles (#415)
Bumps [node-sass](https://github.com/sass/node-sass) from 6.0.1 to 7.0.0.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-11 09:10:55 +01:00
dgtlmoon
d4f4ab306a Dont allow redirect on login, it's safer and more reliable this way (#414) 2022-02-08 21:12:44 +01:00
dgtlmoon
8d2e240a2a When using Env. FETCH_WORKERS or WEBDRIVER_DELAY_BEFORE_CONTENT_READY , it should be type int 2022-02-08 20:01:24 +01:00
dgtlmoon
d7ed479ca2 0.39.8 2022-02-08 18:56:10 +01:00
dgtlmoon
f25cdf0a67 Number of fetching workers can be overriden by Env "FETCH_WORKERS" (#413) 2022-02-08 18:27:56 +01:00
dgtlmoon
5214a7e0f3 Adding Env var "WEBDRIVER_DELAY_BEFORE_CONTENT_READY" to wait n seconds before extracting the text from the browser 2022-02-08 18:24:25 +01:00
dgtlmoon
eb3dca3805 Language fix "watches are rechecking." it actually puts them into an internal queue "watches are QUEUED for rechecking" 2022-02-08 13:00:18 +01:00
dgtlmoon
a580c238b6 Use flask url_for() for webdriver chrome icon instead of relative path 2022-02-05 23:25:57 +01:00
Alexander Aleksandrovič Klimov
7ca89f5ec3 Fix typo in the startup create-directory command suggestion (#405) 2022-02-05 19:46:02 +01:00
Alexander Aleksandrovič Klimov
8ab8aaa6ae Introduce -h option to allow listening not on 0.0.0.0. (#406) 2022-02-05 19:29:22 +01:00
dgtlmoon
22ef9afb93 Refactor tests for notification error log handler (#404) 2022-02-04 20:54:20 +01:00
dgtlmoon
abaec224f6 Notification error log handler (#403)
* Add a notifications debug/error log interface (Link available under the notification URLs list)
2022-02-04 19:29:39 +01:00
dgtlmoon
5a645fb74d Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-02-04 17:31:54 +01:00
dgtlmoon
14db60e518 Add notification note - tgram:// bots cant send messages to other bots, so you should specify chat ID of non-bot user. 2022-02-04 17:31:32 +01:00
Radu Ursache
e250c552d0 fixed the reference to wiki for rpi section (#402) 2022-02-04 10:55:30 +01:00
dgtlmoon
8e54a17e14 /preview format doesnt need <pre> - fixing too many returnlines in content on diff/preview page 2022-02-02 14:39:42 +01:00
dgtlmoon
8607eccaad Update README.md 2022-02-02 11:33:22 +01:00
dgtlmoon
17511d0d7d Update README - Fix docker section 2022-01-30 15:20:26 +01:00
dgtlmoon
41b806228c Update README - Tidy up sections 2022-01-30 15:19:21 +01:00
dgtlmoon
453cf81e1d Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-30 02:15:15 +01:00
dgtlmoon
0095b28ea3 Offer instance on Lemonade
Tidy README
2022-01-30 02:14:32 +01:00
dgtlmoon
73101a47e7 Ability to use a generated salted password in deployments as env var SALTED_PASS (#397)
* Ability to use a generated salted password in deployments as env var SALTED_PASS
2022-01-29 19:36:44 +01:00
dgtlmoon
03f776ca45 #323 Adding note about discord:// 2000 char limit (#392)
* Adding note about discord:// 2000 char limit
2022-01-28 10:38:04 +01:00
dgtlmoon
39b7be9e7a plaintext mime type fix - Don't attempt to extract HTML content from plaintext, this will remove lines and break changedetection (#391) 2022-01-27 23:16:50 +01:00
dgtlmoon
6611823962 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-27 23:01:17 +01:00
dgtlmoon
c1c453e4fe .add_watch() can accept empty tag
Use https://changedetection.io/CHANGELOG.txt as a nice default page to watch
2022-01-27 23:00:39 +01:00
Tim Loderhose
4887180671 Add option for tags on import (#377)
* Add option for tags on import and backup
2022-01-25 18:46:05 +01:00
dgtlmoon
ac7378b7fb Update CONTRIBUTING.md 2022-01-24 22:09:14 +01:00
dgtlmoon
eeba8c864d Update README.md 2022-01-22 15:35:07 +01:00
Travis Howse
abe88192f4 Fix bug where diff and diff_full were switched in notification templates. (#380) 2022-01-21 12:26:08 +01:00
dgtlmoon
af8efbb6d2 Closes #378 2022-01-19 23:16:49 +01:00
dgtlmoon
bbc2875ef3 0.39.7 2022-01-15 23:21:06 +01:00
dgtlmoon
b7ca10ebac Scrub watch snapshot fixes 2022-01-15 23:18:04 +01:00
dgtlmoon
a896493797 Simple HTTP auth (#372)
HTTP Basic Auth form validation
2022-01-15 22:52:39 +01:00
dgtlmoon
e5fe095f16 Adding note about JS pages 2022-01-12 18:18:40 +01:00
dgtlmoon
271181968f Notification settings defaults and validation (#361)
* Re #360 - Validate that when a notification URL is set, we have also a notification body and title, new install should have notification title/body defaults set.
2022-01-10 17:38:04 +01:00
dgtlmoon
8206383ee5 Filters settings helper text tidy-up 2022-01-09 14:36:07 +01:00
dgtlmoon
ecfc02ba23 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-09 11:45:23 +01:00
dgtlmoon
3331ccd061 Add test for low-level network error text handling 2022-01-09 11:45:04 +01:00
Unpublished
bd8f389a65 Add API endpoint for current snapshot (#359) 2022-01-08 16:38:42 +01:00
dgtlmoon
bc74227635 Clarify notice/messages around changing ignore text 2022-01-05 20:42:45 +01:00
dgtlmoon
07c60a6acc Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-05 19:13:42 +01:00
dgtlmoon
7916faf58b 0.39.6 2022-01-05 19:13:36 +01:00
dgtlmoon
febb2bbf0d Heroku tweaks (backup download) (#356)
* use absolute path, just incase the data-dir is set relative
2022-01-05 19:12:13 +01:00
dgtlmoon
59d31bf76f XPath support (#355)
* XPath support and minor improvements to form validation
2022-01-05 17:58:07 +01:00
dgtlmoon
f87f7077a6 Better handling of EmptyReply exception, always bump 'last_checked' in the case of an error (#354)
* Better handling of EmptyReply exception, always bump 'last_checked' in the case of an error, adds test
2022-01-05 14:13:30 +01:00
revilo951
f166ab1e30 Adding note in comments for working arm64 chrome with rPi-4 (#336) 2022-01-05 12:20:56 +01:00
Valtteri Huuskonen
55e679e973 fix typo in README.md (#350)
Fix spelling of Raspberry Pi.
2022-01-04 10:55:20 +01:00
dgtlmoon
e211ba806f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-03 20:16:51 +01:00
dgtlmoon
b33105d576 Re #348 - Add test for backup, use proper datastore path 2022-01-03 20:16:21 +01:00
dgtlmoon
b73f5a5c88 Update README.md 2022-01-03 18:46:50 +01:00
Unpublished
023951a10e Be sure that documents returned with a application/json header are not parsed with inscriptis (#337)
* Auto-detect JSON by Content-Type header
* Add test to not parse JSON responses with inscriptis
2022-01-02 22:35:33 +01:00
dgtlmoon
fbd9ecab62 Re #340 - snapshot should not be modified by ignore text (#344) 2022-01-02 22:35:04 +01:00
dgtlmoon
b5c1fce136 Re #133 Option for ignoring whitespacing (#345)
* Global setting option to ignore whitespace when detecting a change
2022-01-02 22:28:34 +01:00
dgtlmoon
489671dcca Re #342 notification encoding (#343)
* Re #342 - check for accidental python byte encoding of non-utf8/string, check return type of fetcher and fix encoding of notification content
2022-01-02 14:11:04 +01:00
dgtlmoon
d4dc3466dc Update README.md 2022-01-01 18:11:54 +01:00
dgtlmoon
0439acacbe Adding global ignore text (#339) 2022-01-01 14:53:08 +01:00
dgtlmoon
735fc2ac8e Adding new proxyType to selenium mappings 2021-12-31 10:48:11 +01:00
dgtlmoon
8a825f0055 Use selenium 4.1.0 2021-12-31 10:44:45 +01:00
dgtlmoon
d0ae8b7923 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-31 10:35:47 +01:00
dgtlmoon
a504773941 Bumping selenium version re https://github.com/dgtlmoon/changedetection.io/pull/331#issuecomment-1003323594 2021-12-31 10:35:29 +01:00
Calvin Bui
feb8e6c76c Add socksVersion mapping (#331) 2021-12-31 10:26:38 +01:00
dgtlmoon
a37a5038d8 Fix broken RSS link fields 2021-12-30 00:04:38 +01:00
dgtlmoon
f1933b786c RSS Link links you back to the difference UI/JS page, RSS Description is the page you're watching, and RSS Title is the page you're watching 2021-12-29 23:57:30 +01:00
dgtlmoon
d6a6ef2c1d Unify Filters and Triggers tabs into a single tab 2021-12-29 23:37:04 +01:00
dgtlmoon
cf9554b169 Move 'request type' field to the new 'Requests' tab 2021-12-29 23:31:53 +01:00
dgtlmoon
d602cf4646 Aligning call signatures #325 2021-12-29 23:28:34 +01:00
Simon Caron
dfcae4ee64 Extend Request Parameters to add Body & Method (#325) 2021-12-29 23:18:29 +01:00
dgtlmoon
e3bcd8c9bf Update README.md 2021-12-29 08:55:37 +01:00
dgtlmoon
c4990fa3f9 Create CONTRIBUTING.md 2021-12-28 18:59:43 +01:00
dgtlmoon
98461d813e Update README.md 2021-12-28 18:57:39 +01:00
dgtlmoon
8ec17a4c83 Re #267 - Pass settings for the proxy setup for webdriver (#326)
* Re #267 - Pass HTTP_PROXY as the proxy setup for webdriver
* Update README.md
2021-12-28 17:07:41 +01:00
dgtlmoon
ee708cc395 Update README.md 2021-12-28 13:19:24 +01:00
dgtlmoon
8a670c029a Update README.md 2021-12-28 13:18:44 +01:00
dgtlmoon
9fa5aec01e Update README.md 2021-12-28 00:47:00 +01:00
dgtlmoon
43c9cb8b0c 0.39.5 2021-12-27 23:46:29 +01:00
dgtlmoon
b6a359d55b Update feature_request.md 2021-12-27 13:50:38 +01:00
dgtlmoon
ae5a88beea Update issue templates 2021-12-27 13:49:07 +01:00
dgtlmoon
a899d338e9 Update bug_report.md 2021-12-27 13:48:02 +01:00
dgtlmoon
7975e8ec2e Update issue templates 2021-12-27 13:46:41 +01:00
dgtlmoon
ce383bcd04 W3C HTML validation issue around RSS icon 2021-12-27 10:55:43 +01:00
dgtlmoon
0b0cdb101b Closes #323 adds link to wiki 2021-12-27 10:14:40 +01:00
dgtlmoon
396509bae8 Update README.md 2021-12-22 10:43:22 +01:00
dgtlmoon
2973f40035 Update README.md 2021-12-22 10:42:48 +01:00
dgtlmoon
067fac862c Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-19 23:17:48 +01:00
dgtlmoon
20647ea319 improve theming docs 2021-12-19 23:17:24 +01:00
dgtlmoon
fafc7fda62 Update README.md 2021-12-19 23:10:55 +01:00
dgtlmoon
b1aaf9f277 Update README.md 2021-12-19 23:04:56 +01:00
dgtlmoon
18987aeb23 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-19 18:17:37 +01:00
dgtlmoon
856789a9ba Closes #315 - Include library apprise Notify_mqtt 2021-12-19 18:16:51 +01:00
Iván
2857c7bb77 Re #80, sets SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites (#312)
* set SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites

* Re #80, sets SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites
2021-12-16 12:13:47 +01:00
dgtlmoon
df951637c4 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-16 11:53:39 +01:00
dgtlmoon
ba6fe076bb Go back to docker hub 2021-12-16 11:53:28 +01:00
dgtlmoon
9815fc2526 RSS allow access via token (#310)
Allow access via a token
* New RSS URL
* Redirect the old RSS feed URL
* fix tests
2021-12-16 00:05:01 +01:00
dgtlmoon
e71dbbe771 Adding deploy to Heroku button 2021-12-15 23:32:48 +01:00
dgtlmoon
bd222c99c6 Adding heroku app.json app 2021-12-15 23:28:23 +01:00
dgtlmoon
4b002ad9e0 Tweak runtime Heroku version 2021-12-15 23:20:21 +01:00
dgtlmoon
fe2ffd6356 Tweaking heroku Procfile 2021-12-15 23:20:06 +01:00
dgtlmoon
266bebb5bc Adjust buildpacks on Heroku 2021-12-15 23:15:36 +01:00
dgtlmoon
115ff5bc2e Adding heroku python3 runtime config 2021-12-15 23:13:03 +01:00
dgtlmoon
dd6a24d337 Try simpler heroku recipe 2021-12-15 23:09:43 +01:00
dgtlmoon
f0d418d58c Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-15 23:07:32 +01:00
dgtlmoon
10d3b09051 -C option to create a datadir if it doesnt exist 2021-12-15 23:07:13 +01:00
dgtlmoon
35d0c74454 Re #308 - Adding test and including settings in clone operation (#309) 2021-12-15 19:54:30 +01:00
Glassed Silver
dd450b81ad fixing too small font in diff UI (#260)
* fixing too small font in diff UI , lower size from 12 to 11 in Part II
2021-12-15 19:21:25 +01:00
dgtlmoon
512d76c52b Update README.md
Make link more accurate
2021-12-10 20:21:27 +01:00
dgtlmoon
5a10acfd09 Send diff in notifications (#296) 2021-12-10 12:08:51 +01:00
dgtlmoon
a7c09c8990 Fix scrub form theme 2021-12-10 00:09:54 +01:00
dgtlmoon
9235eae608 Scrub dates: Fix date regex limit handler parsing 2021-12-10 00:09:42 +01:00
dgtlmoon
5bbd82be79 Wait 60 seconds or until stop_thread is set 2021-12-09 23:28:17 +01:00
dgtlmoon
7f8c0fb2fa Check that a notification URL is set when sending the test notification (#300) 2021-12-08 12:23:48 +01:00
Tristan Hill
489eedf34e Flask 2 (#299)
Co-authored-by: Tristan Hill <t+git@eaux.uk>
2021-12-07 23:23:23 +01:00
dgtlmoon
3956b3fd68 Re #269 - Show current/correct BASE_URL information (#271)
* Re #269 - Show current/correct BASE_URL information
2021-12-04 15:23:23 +01:00
dgtlmoon
61c1d213d0 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-04 14:48:18 +01:00
dgtlmoon
e07f573f64 Re #269 - Fix env var comment name 2021-12-04 14:47:46 +01:00
ghjklw
ecba130fdb Enable Markdown and HTML notifications. (#288)
This change enable defining the notification body as HTML or Markdown. This can be very
useful to have more user-friendly notifications such as:
* applying a heading style to the `{watch_title}` to make it stand out
* creating clickable links using the `{watch_url}`, `{preview_url}` and `{diff_url}`.

Changes
=======
* Add a `notification_format` to the notification settings, defaults to plain text.
* Use the `body_format` parameter of Apprise's `notify` method.

Co-authored-by: Malo Jaffré <malo.jaffre@dunnhumby.com>
2021-12-04 14:41:48 +01:00
dgtlmoon
ff6dc842c0 0.39.4 release 2021-12-02 22:54:38 +01:00
dgtlmoon
4659993ecf Re #286 - Solving lost data/corrupted data - Tweak timing and try to write to a temp file first (#292)
* Re #286 - Tweak timing and try to write to a temp file first, Increase logging and format info message better.
2021-12-02 22:48:44 +01:00
jeremysherriff
0a29b3a582 Fix element paths when using reverse proxy subfolder (#272) 2021-11-12 11:34:19 +01:00
dgtlmoon
c55bf418c5 0.39.3 release 2021-10-28 11:32:33 +02:00
dgtlmoon
4bbb7d99b6 Re #264 - fixing clone watch operation 2021-10-28 11:29:59 +02:00
dgtlmoon
a8e92e2226 Re #265 - extended jsonpath support (#266)
* Re #265 - Use extended JSONpath support,
Allow a JSONPath selector to not match anything (yet)
Adding test
Correctly capture invalid JSONPath query error
2021-10-27 09:24:08 +02:00
dgtlmoon
c17327633f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-10-26 22:32:29 +02:00
dgtlmoon
56d1dde7c3 Re #265 - wasnt catching the jsonpath exception due to invalid jsonpath expressions properly 2021-10-26 22:30:58 +02:00
dgtlmoon
6e4ddacaf8 Re #257 - Handle bool val of json path better (#263)
* Re #257 - Handle bool val of json path better, with test
2021-10-21 23:25:38 +02:00
dgtlmoon
3195ffa1c6 Re #249 - Add EXPOSE 5000 to Dockerfile 2021-10-06 22:28:35 +02:00
dgtlmoon
c749d2ee44 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-10-06 20:51:38 +02:00
dgtlmoon
ec94359f3c Provide better combination of chardet and urllib3 2021-10-06 20:51:05 +02:00
dgtlmoon
4d0bd58eb1 Prefer GHCR.io over DockerHub (#245)
* Prefer GHCR.io over DockerHub (DockerHub limits pulls)
2021-10-06 13:07:56 +02:00
dgtlmoon
3525f43469 Limit branches/tags of container build
Limit branch
2021-10-06 12:27:02 +02:00
dgtlmoon
d70252c1eb Re #213 - Adding screensize examples to selenium container 2021-10-06 11:34:24 +02:00
dgtlmoon
b57b94c63a Be more specific about tagged release builds 2021-10-06 11:28:39 +02:00
dgtlmoon
9e914c140e Fix :latest release worflow syntax check 2021-10-06 10:27:03 +02:00
dgtlmoon
5d5ceb2f52 Form helper - explain where the webdriver setting comes from 2021-10-06 09:27:41 +02:00
dgtlmoon
bc0303c5da Rename workflow name 2021-10-06 08:59:03 +02:00
dgtlmoon
1240da4a6e Just 'published' and 'edited' package release is enough (remove 'created') 2021-10-06 08:52:10 +02:00
dgtlmoon
4267bda853 Fixing workflow tag syntax issues 2021-10-06 08:49:33 +02:00
dgtlmoon
db1ff1843c fix broken workflow syntax 2021-10-06 08:45:05 +02:00
dgtlmoon
fe3c20b618 add step for metadata debug, see if it runs by checking workflow tag name 2021-10-06 08:42:40 +02:00
dgtlmoon
2fa93cba3a Container build/push doesnt need to be so specific 2021-10-05 22:09:12 +02:00
dgtlmoon
254fbd5a47 Oops on/release was in the wrong block 2021-10-05 19:13:45 +02:00
dgtlmoon
18f2318572 release also on edited, published 2021-10-05 19:05:09 +02:00
dgtlmoon
84417fc2b1 Run workflow on release 2021-10-05 19:02:05 +02:00
dgtlmoon
7f7fc737b3 Use a better switch mechanism for build type 2021-10-05 18:48:54 +02:00
dgtlmoon
2dc43bdfd3 version 0.39.2 2021-10-05 18:21:40 +02:00
dgtlmoon
95e39aa727 Configurable BASE_URL (#228)
Re #152 ability to over-ride env var BASE_URL, with UI+tests
2021-10-05 18:15:36 +02:00
dgtlmoon
2c71f577e0 Split python pip builder to its own release based workflow 2021-10-05 17:01:34 +02:00
dgtlmoon
f987d32c72 remove accidental syntax add 2021-10-05 17:01:26 +02:00
dgtlmoon
cd7df86f54 Re #242 - app was treating notification field defaults as the field value (#244) 2021-10-05 14:33:57 +02:00
dgtlmoon
cb8fa2583a attempt to re-enable docker layer cache 2021-10-05 11:48:09 +02:00
dgtlmoon
3d3e5db81c Forgot GHCR tag with version 2021-10-05 11:43:56 +02:00
dgtlmoon
c9860dc55e Limit container build to releases and master 2021-10-05 11:13:23 +02:00
dgtlmoon
dbd5cf117a Fix GHCR login 2021-10-05 10:47:50 +02:00
dgtlmoon
e805d6ebe3 Use the same workflow for tag and release 2021-10-05 10:40:28 +02:00
dgtlmoon
01f469d91d Drop redundant build workflow 2021-10-05 10:33:15 +02:00
dgtlmoon
e91cab0c6d try :latest and :tag in same workflow run 2021-10-05 10:28:27 +02:00
dgtlmoon
106c3269a6 Separate workflows 2021-10-05 10:16:23 +02:00
dgtlmoon
1628602860 Docker image build issues (#243)
Pin cryptography ~= 3.4, fixes build issues for multiplatform docker buildx, and a little tidy up of github workflows.
2021-10-05 10:09:10 +02:00
dgtlmoon
ca0ab50c5e Re #239 - Individual GUID for watch+changeevent (#241)
* Re #239 - Individual GUID for watch+changeevent
2021-10-04 08:34:10 +02:00
dgtlmoon
df0b7bb0fe Update README.md
Re #240 return update instructions
2021-10-03 19:25:50 +02:00
dgtlmoon
fe59ac4986 Re #232 - Use a copy of the datastore incase it changes while we iterate through it (#234) 2021-09-23 18:27:16 +02:00
dgtlmoon
25476bfcb2 Setting for Extract <title> as title option on individual watches (#229)
* Extract <title> as title option on individual items
2021-09-19 22:57:15 +02:00
dgtlmoon
6901fc493d Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-09-18 10:33:09 +02:00
dgtlmoon
c40417ff96 GitHub repo build platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7 2021-09-18 10:32:45 +02:00
dgtlmoon
fd2d938528 GitHub container repo (#227) 2021-09-18 00:11:54 +02:00
dgtlmoon
cd20dea590 Remove extra build step 2021-09-17 23:50:58 +02:00
dgtlmoon
f921e98265 push github container master also 2021-09-17 23:44:57 +02:00
dgtlmoon
c0e905265c Tidy up workflow names 2021-09-17 23:38:50 +02:00
dgtlmoon
5e6a923c35 Attempt to setup GitHub Container Registry 2021-09-17 23:37:28 +02:00
133 changed files with 9639 additions and 5302 deletions

.github/ISSUE_TEMPLATE/bug_report.md (new file)

@@ -0,0 +1,44 @@
---
name: Bug report
about: Create a bug report, if you don't follow this template, your report will be DELETED
title: ''
labels: 'triage'
assignees: 'dgtlmoon'
---
**Describe the bug**
A clear and concise description of what the bug is.
**Version**
*Exact version* in the top right area: 0....
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
! ALWAYS INCLUDE AN EXAMPLE URL WHERE IT IS POSSIBLE TO RE-CREATE THE ISSUE - USE THE 'SHARE WATCH' FEATURE AND PASTE IN THE SHARE-LINK!
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/feature_request.md (new file)

@@ -0,0 +1,23 @@
---
name: Feature request
about: Suggest an idea for this project
title: '[feature]'
labels: 'enhancement'
assignees: ''
---
**Version and OS**
For example, 0.123 on linux/docker
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe the use-case and give concrete real-world examples**
Attach any HTML/JSON, give links to sites, screenshots etc, we are not mind readers
**Additional context**
Add any other context or screenshots about the feature request here.

.github/workflows/containers.yml (new file)

@@ -0,0 +1,130 @@
name: Build and push containers
on:
# Automatically triggered by a testing workflow passing, but this is only checked when it lands in the `master`/default branch
# workflow_run:
# workflows: ["ChangeDetection.io Test"]
# branches: [master]
# tags: ['0.*']
# types: [completed]
# Or a new tagged release
release:
types: [published, edited]
push:
branches:
- master
jobs:
metadata:
runs-on: ubuntu-latest
steps:
- name: Show metadata
run: |
echo SHA ${{ github.sha }}
echo github.ref: ${{ github.ref }}
echo github_ref: $GITHUB_REF
echo Event name: ${{ github.event_name }}
echo Ref ${{ github.ref }}
echo c: ${{ github.event.workflow_run.conclusion }}
echo r: ${{ github.event.workflow_run }}
echo tname: "${{ github.event.release.tag_name }}"
echo headbranch: -${{ github.event.workflow_run.head_branch }}-
set
build-push-containers:
runs-on: ubuntu-latest
# If the testing workflow has a success, then we build to :latest
# Or if we are in a tagged release scenario.
if: ${{ github.event.workflow_run.conclusion == 'success' }} || ${{ github.event.release.tag_name }} != ''
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements-dev.txt ]; then pip install -r requirements-dev.txt; fi
- name: Create release metadata
run: |
# COPY'ed by Dockerfile into changedetectionio/ of the image, then read by the server in store.py
echo ${{ github.sha }} > changedetectionio/source.txt
echo ${{ github.ref }} > changedetectionio/tag.txt
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
with:
image: tonistiigi/binfmt:latest
platforms: all
- name: Login to GitHub Container Registry
uses: docker/login-action@v1
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub Container Registry
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
install: true
version: latest
driver-opts: image=moby/buildkit:master
# master branch -> :dev container tag
- name: Build and push :dev
id: docker_build
if: ${{ github.ref }} == "refs/heads/master"
uses: docker/build-push-action@v2
with:
context: ./
file: ./Dockerfile
push: true
tags: |
${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:dev,ghcr.io/${{ github.repository }}:dev
platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
# A new tagged release is required, which builds :tag and :latest
- name: Build and push :tag
id: docker_build_tag_release
if: github.event_name == 'release' && startsWith(github.event.release.tag_name, '0.')
uses: docker/build-push-action@v2
with:
context: ./
file: ./Dockerfile
push: true
tags: |
${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:${{ github.event.release.tag_name }}
ghcr.io/dgtlmoon/changedetection.io:${{ github.event.release.tag_name }}
${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:latest
ghcr.io/dgtlmoon/changedetection.io:latest
platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
- name: Image digest
run: echo step SHA ${{ steps.vars.outputs.sha_short }} tag ${{steps.vars.outputs.tag}} branch ${{steps.vars.outputs.branch}} digest ${{ steps.docker_build.outputs.digest }}
- name: Cache Docker layers
uses: actions/cache@v2
with:
path: /tmp/.buildx-cache
key: ${{ runner.os }}-buildx-${{ github.sha }}
restore-keys: |
${{ runner.os }}-buildx-

Deleted workflow (file path not shown)

@@ -1,88 +0,0 @@
name: Javascript/Webdriver support - Test, build and push to Docker Hub :javascript tag
on:
push:
branches: [ javascript-browser ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements-dev.txt ]; then pip install -r requirements-dev.txt; fi
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Create release metadata
run: |
# COPY'ed by Dockerfile into changedetectionio/ of the image, then read by the server in store.py
echo ${{ github.sha }} > changedetectionio/source.txt
echo ${{ github.ref }} > changedetectionio/tag.txt
- name: Test with pytest
run: |
# Each test is totally isolated and performs its own cleanup/reset
cd changedetectionio; ./run_all_tests.sh
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
with:
image: tonistiigi/binfmt:latest
platforms: all
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
install: true
version: latest
driver-opts: image=moby/buildkit:master
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
context: ./
file: ./Dockerfile
push: true
tags: |
${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:javascript-dev
platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
- name: Image digest
run: echo step SHA ${{ steps.vars.outputs.sha_short }} tag ${{steps.vars.outputs.tag}} branch ${{steps.vars.outputs.branch}} digest ${{ steps.docker_build.outputs.digest }}
# failed: Cache service responded with 503
# - name: Cache Docker layers
# uses: actions/cache@v2
# with:
# path: /tmp/.buildx-cache
# key: ${{ runner.os }}-buildx-${{ github.sha }}
# restore-keys: |
# ${{ runner.os }}-buildx-

Deleted workflow (file path not shown)

@@ -1,92 +0,0 @@
name: Test, build and push tagged release to Docker Hub
on:
push:
tags:
- '*.*'
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
- uses: olegtarasov/get-tag@v2.1
id: tagName
# with:
# tagRegex: "foobar-(.*)" # Optional. Returns specified group text as tag name. Full tag string is returned if regex is not defined.
# tagRegexGroup: 1 # Optional. Default is 1.
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements-dev.txt ]; then pip install -r requirements-dev.txt; fi
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Create release metadata
run: |
# COPY'ed by Dockerfile into changedetectionio/ of the image, then read by the server in store.py
echo ${{ github.sha }} > changedetectionio/source.txt
echo ${{ github.ref }} > changedetectionio/tag.txt
- name: Test with pytest
run: |
# Each test is totally isolated and performs its own cleanup/reset
cd changedetectionio; ./run_all_tests.sh
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
with:
image: tonistiigi/binfmt:latest
platforms: all
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
install: true
version: latest
driver-opts: image=moby/buildkit:master
- name: tag
run : echo ${{ github.event.release.tag_name }}
- name: Build and push tagged version
id: docker_build
uses: docker/build-push-action@v2
with:
context: ./
file: ./Dockerfile
push: true
tags: |
${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:${{ steps.tagName.outputs.tag }}
platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
env:
SOURCE_NAME: ${{ steps.branch_name.outputs.SOURCE_NAME }}
SOURCE_BRANCH: ${{ steps.branch_name.outputs.SOURCE_BRANCH }}
SOURCE_TAG: ${{ steps.branch_name.outputs.SOURCE_TAG }}
- name: Image digest
run: echo step SHA ${{ steps.vars.outputs.sha_short }} tag ${{steps.vars.outputs.tag}} branch ${{steps.vars.outputs.branch}} digest ${{ steps.docker_build.outputs.digest }}

Deleted workflow (file path not shown)

@@ -1,88 +0,0 @@
name: Test, build and push to Docker Hub
on:
push:
branches: [ master, arm-build ]
jobs:
build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
if [ -f requirements-dev.txt ]; then pip install -r requirements-dev.txt; fi
- name: Lint with flake8
run: |
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Create release metadata
run: |
# COPY'ed by Dockerfile into changedetectionio/ of the image, then read by the server in store.py
echo ${{ github.sha }} > changedetectionio/source.txt
echo ${{ github.ref }} > changedetectionio/tag.txt
- name: Test with pytest
run: |
# Each test is totally isolated and performs its own cleanup/reset
cd changedetectionio; ./run_all_tests.sh
- name: Set up QEMU
uses: docker/setup-qemu-action@v1
with:
image: tonistiigi/binfmt:latest
platforms: all
- name: Login to Docker Hub
uses: docker/login-action@v1
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v1
with:
install: true
version: latest
driver-opts: image=moby/buildkit:master
- name: Build and push
id: docker_build
uses: docker/build-push-action@v2
with:
context: ./
file: ./Dockerfile
push: true
tags: |
${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io
platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
- name: Image digest
run: echo step SHA ${{ steps.vars.outputs.sha_short }} tag ${{steps.vars.outputs.tag}} branch ${{steps.vars.outputs.branch}} digest ${{ steps.docker_build.outputs.digest }}
# failed: Cache service responded with 503
# - name: Cache Docker layers
# uses: actions/cache@v2
# with:
# path: /tmp/.buildx-cache
# key: ${{ runner.os }}-buildx-${{ github.sha }}
# restore-keys: |
# ${{ runner.os }}-buildx-

.github/workflows/pypi.yml (new file)

@@ -0,0 +1,44 @@
name: PyPi Test and Push tagged release
# Runs after the "ChangeDetection.io Test" workflow completes (for tagged releases)
on:
workflow_run:
workflows: ["ChangeDetection.io Test"]
tags: '*.*'
types: [completed]
jobs:
test-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- name: Set up Python 3.9
uses: actions/setup-python@v2
with:
python-version: 3.9
# - name: Install dependencies
# run: |
# python -m pip install --upgrade pip
# pip install flake8 pytest
# if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
# if [ -f requirements-dev.txt ]; then pip install -r requirements-dev.txt; fi
- name: Test that pip builds without error
run: |
pip3 --version
python3 -m pip install wheel
python3 setup.py bdist_wheel
python3 -m pip install dist/changedetection.io-*-none-any.whl --force
changedetection.io -d /tmp -p 10000 &
sleep 3
curl http://127.0.0.1:10000/static/styles/pure-min.css >/dev/null
killall -9 changedetection.io
# https://github.com/docker/build-push-action/blob/master/docs/advanced/test-before-push.md ?
# https://github.com/docker/buildx/issues/59 ? Needs to be one platform?
# https://github.com/docker/buildx/issues/495#issuecomment-918925854
#if: ${{ github.event_name == 'release'}}


@@ -4,7 +4,7 @@ name: ChangeDetection.io Test
on: [push, pull_request]
jobs:
build:
test-build:
runs-on: ubuntu-latest
steps:
@@ -14,6 +14,9 @@ jobs:
with:
python-version: 3.9
- name: Show env vars
run: set
- name: Install dependencies
run: |
python -m pip install --upgrade pip
@@ -27,20 +30,16 @@ jobs:
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
- name: Unit tests
run: |
python3 -m unittest changedetectionio.tests.unit.test_notification_diff
- name: Test with pytest
run: |
# Each test is totally isolated and performs its own cleanup/reset
cd changedetectionio; ./run_all_tests.sh
- name: Test that pip builds without error
run: |
pip3 --version
python3 -m pip install wheel
python3 setup.py bdist_wheel
python3 -m pip install dist/changedetection.io-*-none-any.whl --force
changedetection.io -d /tmp -p 10000 &
sleep 3
curl http://127.0.0.1:10000/static/styles/pure-min.css >/dev/null
killall -9 changedetection.io
# https://github.com/docker/build-push-action/blob/master/docs/advanced/test-before-push.md ?
# https://github.com/docker/buildx/issues/59 ? Needs to be one platform?
# https://github.com/docker/buildx/issues/495#issuecomment-918925854

.gitignore vendored

@@ -7,4 +7,7 @@ __pycache__
.pytest_cache
build
dist
venv
test-datastore
*.egg-info*
.vscode/settings.json

CONTRIBUTING.md Normal file

@@ -0,0 +1,15 @@
Contributing is always welcome!
I am not a professional Flask developer; if you know a better way something can be done, please let me know!
Otherwise, it's always best to PR into the `dev` branch.
Please be sure that all new functionality has a matching test!
Use `pytest` to validate/test; you can run the existing tests with, for example, `pytest tests/test_notifications.py`
```
pip3 install -r requirements-dev.txt
```
This is from https://github.com/dgtlmoon/changedetection.io/blob/master/requirements-dev.txt


@@ -20,6 +20,11 @@ COPY requirements.txt /requirements.txt
RUN pip install --target=/dependencies -r /requirements.txt
# Playwright is an alternative to Selenium
# Excluded this package from requirements.txt to prevent arm/v6 and arm/v7 builds from failing
RUN pip install --target=/dependencies playwright~=1.20 \
|| echo "WARN: Failed to install Playwright. The application can still run, but the Playwright option will be disabled."
# Final image stage
FROM python:3.8-slim
@@ -42,10 +47,15 @@ ENV PYTHONUNBUFFERED=1
RUN [ ! -d "/datastore" ] && mkdir /datastore
# Re #80, sets SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites
RUN sed -i 's/^CipherString = .*/CipherString = DEFAULT@SECLEVEL=1/' /etc/ssl/openssl.cnf
# Copy modules over to the final image and add their dir to PYTHONPATH
COPY --from=builder /dependencies /usr/local
ENV PYTHONPATH=/usr/local
EXPOSE 5000
# The actual flask app
COPY changedetectionio /app/changedetectionio
# The eventlet server wrapper


@@ -1,4 +1,8 @@
recursive-include changedetectionio/api *
recursive-include changedetectionio/templates *
recursive-include changedetectionio/static *
recursive-include changedetectionio/model *
include changedetection.py
global-exclude *.pyc
global-exclude *.pyc
global-exclude node_modules
global-exclude venv

Procfile Normal file

@@ -0,0 +1 @@
web: python3 ./changedetection.py -C -d ./datastore -p $PORT


@@ -16,6 +16,13 @@ Live your data-life *pro-actively* instead of *re-actively*, do not rely on mani
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />
**Get your own private instance now! Let us host it for you!**
[**Try our $6.99/month subscription - unlimited checks, watches and notifications!**](https://lemonade.changedetection.io/start), choose from different geographical locations, let us handle everything for you.
#### Example use cases
Know when ...
@@ -58,14 +65,3 @@ Then visit http://127.0.0.1:5000 , You should now be able to access the UI.
See https://github.com/dgtlmoon/changedetection.io for more information.
### Support us
Do you use changedetection.io to make money? does it save you time or money? Does it make your life easier? less stressful? Remember, we write this software when we should be doing actual paid work, we have to buy food and pay rent just like you.
Please support us, even small amounts help a LOT.
BTC `1PLFN327GyUarpJd7nVe7Reqg9qHx5frNn`
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/btc-support.png" style="max-width:50%;" alt="Support us!" />

README.md

@@ -1,57 +1,91 @@
# changedetection.io
[![Release Version][release-shield]][release-link] [![Docker Pulls][docker-pulls]][docker-link] [![License][license-shield]](LICENSE.md)
![changedetection.io](https://github.com/dgtlmoon/changedetection.io/actions/workflows/test-only.yml/badge.svg?branch=master)
<a href="https://hub.docker.com/r/dgtlmoon/changedetection.io" target="_blank" title="Change detection docker hub">
<img src="https://img.shields.io/docker/pulls/dgtlmoon/changedetection.io" alt="Docker Pulls"/>
</a>
<a href="https://hub.docker.com/r/dgtlmoon/changedetection.io" target="_blank" title="Change detection docker hub">
<img src="https://img.shields.io/github/v/release/dgtlmoon/changedetection.io" alt="Change detection latest tag version"/>
</a>
## Self-hosted open source change monitoring of web pages.
## Web Site Change Detection, Monitoring and Notification - Self-Hosted or SaaS.
_Know when web pages change! Stay on top of new information!_
_Know when web pages change! Stay on top of new information! Get notifications when important website content changes._
Live your data-life *pro-actively* instead of *re-actively*, do not rely on manipulative social media for consuming important information.
Live your data-life *pro-actively* instead of *re-actively*.
Open source web page monitoring, notification and change detection.
Free, Open-source web page monitoring, notification and change detection. Don't have time? [**Try our $6.99/month subscription - unlimited checks and watches!**](https://lemonade.changedetection.io/start)
[![Discord](https://img.shields.io/badge/DISCORD-%237289DA.svg?style=for-the-badge&logo=discord&logoColor=white)](https://discord.gg/vUNt4EtWMF) [ ![YouTube](https://img.shields.io/badge/YouTube-%23FF0000.svg?style=for-the-badge&logo=YouTube&logoColor=white)](https://www.youtube.com/channel/UCbS09q1TRf0o4N2t-WA3emQ) [![LinkedIn](https://img.shields.io/badge/linkedin-%230077B5.svg?style=for-the-badge&logo=linkedin&logoColor=white)](https://www.linkedin.com/company/changedetection-io/)
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://lemonade.changedetection.io/start)
**Get your own private instance now! Let us host it for you!**
[**Try our $6.99/month subscription - unlimited checks and watches!**](https://lemonade.changedetection.io/start), _half the price of other website change monitoring services, and it comes with unlimited watches & checks!_
- Automatic Updates, Automatic Backups, No Heroku "paused application", don't miss a change!
- Javascript browser included
- Unlimited checks and watches!
#### Example use cases
Know when ...
- Government department updates (changes are often only on their websites)
- Local government news (changes are often only on their websites)
- Products and services have a change in pricing
- Governmental department updates (changes are often only on their websites)
- New software releases, security advisories when you're not on their mailing list.
- Festivals with changes
- Real estate listing changes
- Know when your favourite whiskey is on sale, or other special deals are announced before anyone else
- COVID related news from government websites
- University/organisation news from their website
- Detect and monitor changes in JSON API responses
- API monitoring and alerting
- JSON API monitoring and alerting
- Changes in legal and other documents
- Trigger API calls via notifications when text appears on a website
- Glue together APIs using the JSON filter and JSON notifications
- Create RSS feeds based on changes in web content
- Monitor HTML source code for unexpected changes, strengthen your PCI compliance
- You have a very sensitive list of URLs to watch and you do _not_ want to use the paid alternatives. (Remember, _you_ are the product)
_Need an actual Chrome runner with JavaScript support? We support fetching via WebDriver!_
**Get monitoring now! super simple, one command!**
## Screenshots
Run the python code on your own machine by cloning this repository, or with <a href="https://docs.docker.com/get-docker/">docker</a> and/or <a href="https://www.digitalocean.com/community/tutorial_collections/how-to-install-docker-compose">docker-compose</a>
### Examine differences in content.
Easily see what changed, examine by word, line, or individual character.
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />
Please :star: star :star: this project and help it grow! https://github.com/dgtlmoon/changedetection.io/
### Filter by elements using the Visual Selector tool.
Available when connected to a <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Playwright-content-fetcher">playwright content fetcher</a> (included as part of our subscription service)
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/visualselector-anim.gif" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />
## Installation
### Docker
Check out our Docker hub page https://hub.docker.com/r/dgtlmoon/changedetection.io
With Docker Compose, just clone this repository and..
With Docker Compose, just clone this repository and
```bash
$ docker-compose up -d
```
Docker standalone
```bash
$ docker run -d --restart always -p "127.0.0.1:5000:5000" -v datastore-volume:/datastore --name changedetection.io dgtlmoon/changedetection.io
```
`:latest` tag is our latest stable release, `:dev` tag is our bleeding edge `master` branch.
### Windows
See the install instructions at the wiki https://github.com/dgtlmoon/changedetection.io/wiki/Microsoft-Windows
### Python Pip
Check out our pypi page https://pypi.org/project/changedetection.io/
@@ -65,16 +99,31 @@ Then visit http://127.0.0.1:5000 , You should now be able to access the UI.
_Now with per-site configurable support for using a fast built in HTTP fetcher or use a Chrome based fetcher for monitoring of JavaScript websites!_
### Screenshots
## Updating changedetection.io
Examining differences in content.
### Docker
```
docker pull dgtlmoon/changedetection.io
docker kill $(docker ps -a|grep changedetection.io|awk '{print $1}')
docker rm $(docker ps -a|grep changedetection.io|awk '{print $1}')
docker run -d --restart always -p "127.0.0.1:5000:5000" -v datastore-volume:/datastore --name changedetection.io dgtlmoon/changedetection.io
```
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />
### docker-compose
Please :star: star :star: this project and help it grow! https://github.com/dgtlmoon/changedetection.io/
```bash
docker-compose pull && docker-compose up -d
```
### Notifications
See the wiki for more information https://github.com/dgtlmoon/changedetection.io/wiki
## Filters
XPath, JSONPath and CSS support comes baked in! You can be as specific as you need, use XPath exported from various XPath element query creation tools.
(We support LXML `re:test`, `re:match` and `re:replace`.)
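As a rough illustration (not the project's own code), here is how such an EXSLT regular-expression test can be evaluated with lxml; the HTML fragment and selector below are made up:
```python
# Illustrative only: evaluating an XPath filter that uses the EXSLT `re:test`
# extension with lxml. The HTML fragment and selector are placeholders.
from lxml import html

doc = html.fromstring("<div class='Product-Price'>23.50</div><div class='nav'>menu</div>")
matches = doc.xpath(
    '//div[re:test(@class, "product-price", "i")]',
    namespaces={'re': 'http://exslt.org/regular-expressions'},
)
print([el.text for el in matches])  # ['23.50']
```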
## Notifications
ChangeDetection.io supports a massive number of notification services (including email, Office 365, custom APIs, etc.) when a change is detected on a web page, thanks to the <a href="https://github.com/caronc/apprise">apprise</a> library.
Simply set one or more notification URLs in the _[edit]_ tab of that watch.
@@ -92,23 +141,23 @@ Just some examples
json://someserver.com/custom-api
syslog://
<a href="https://github.com/caronc/apprise">And everything else in this list!</a>
<a href="https://github.com/caronc/apprise#popular-notification-services">And everything else in this list!</a>
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/screenshot-notifications.png" style="max-width:100%;" alt="Self-hosted web page change monitoring notifications" title="Self-hosted web page change monitoring notifications" />
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-notifications.png" style="max-width:100%;" alt="Self-hosted web page change monitoring notifications" title="Self-hosted web page change monitoring notifications" />
Now you can also customise your notification content!
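For a sense of what happens underneath, here is a minimal sketch using the apprise library directly; the notification URL and credentials are placeholders, not a recommended configuration:
```python
# Minimal apprise sketch (illustrative); the mailto:// URL is a placeholder.
import apprise

a = apprise.Apprise()
a.add('mailto://user:password@example.com')  # any apprise-supported URL works here
a.notify(title='ChangeDetection.io', body='The watched page content has changed')
```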
### JSON API Monitoring
## JSON API Monitoring
Detect changes and monitor data in JSON APIs by using the built-in JSONPath selectors as a filter / selector.
![image](https://user-images.githubusercontent.com/275001/125165842-0ce01980-e1dc-11eb-9e73-d8137dd162dc.png)
![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/json-filter-field-example.png)
This will re-parse the JSON and apply formatting to the text, making it super easy to monitor and detect changes in JSON API results
![image](https://user-images.githubusercontent.com/275001/125165995-d9ea5580-e1dc-11eb-8030-f0deced2661a.png)
![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/json-diff-example.png)
#### Parse JSON embedded in HTML!
### Parse JSON embedded in HTML!
When you enable a `json:` filter, you can even automatically extract and parse embedded JSON inside an HTML page! Amazingly handy for sites that build content based on JSON, such as many e-commerce websites.
@@ -122,40 +171,37 @@ When you enable a `json:` filter, you can even automatically extract and parse e
`json:$.price` would give `23.50`, or you can extract the whole structure
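As a hedged sketch of what a `json:` filter does conceptually, the snippet below evaluates a JSONPath expression with the jsonpath-ng library against a made-up payload (not necessarily the exact internal code path):
```python
# Illustrative JSONPath evaluation for a filter like `json:$.price`; payload is made up.
import json
from jsonpath_ng import parse

payload = json.loads('{"name": "Widget", "price": 23.50, "stock": 12}')
matches = [m.value for m in parse('$.price').find(payload)]
print(matches)  # [23.5]
```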
### Proxy
## Proxy configuration
A proxy for ChangeDetection.io can be configured by setting the `HTTP_PROXY` and `HTTPS_PROXY` environment variables; examples are also in the `docker-compose.yml`
See the wiki https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration
An exclusion list can be specified with `NO_PROXY`, for example `"localhost,192.168.0.0/24"`
## Raspberry Pi support?
With `docker run`, use `-e`:
```
docker run -d --restart always -e HTTPS_PROXY="socks5h://10.10.1.10:1080" -p "127.0.0.1:5000:5000" -v datastore-volume:/datastore --name changedetection.io dgtlmoon/changedetection.io
```
With `docker-compose`, see the `Proxy support example` in <a href="https://github.com/dgtlmoon/changedetection.io/blob/master/docker-compose.yml">docker-compose.yml</a>.
For more information see https://docs.python-requests.org/en/master/user/advanced/#proxies
This proxy support also extends to the notifications https://github.com/caronc/apprise/issues/387#issuecomment-841718867
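For reference, a minimal sketch of how `HTTP_PROXY`/`HTTPS_PROXY` style settings map onto a plain `requests` call (the proxy address is a placeholder and this is not the application's exact fetcher code):
```python
# Illustrative: building a requests proxies mapping from HTTP(S)_PROXY env vars.
import os
import requests

proxies = {}
if os.getenv('HTTP_PROXY'):
    proxies['http'] = os.getenv('HTTP_PROXY')
if os.getenv('HTTPS_PROXY'):
    proxies['https'] = os.getenv('HTTPS_PROXY')  # e.g. socks5h://10.10.1.10:1080 (placeholder)

r = requests.get('https://example.com', proxies=proxies, timeout=30)
print(r.status_code)
```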
Raspberry Pi and linux/arm/v6 linux/arm/v7 arm64 devices are supported! See the wiki for [details](https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver)
### Raspberry Pi support?
Raspberry Pi and linux/arm/v6 linux/arm/v7 arm64 devices are supported!
### Windows native support?
Sorry not yet :( https://github.com/dgtlmoon/changedetection.io/labels/windows
### Support us
## Support us
Do you use changedetection.io to make money? Does it save you time or money? Does it make your life easier or less stressful? Remember, we write this software when we should be doing actual paid work; we have to buy food and pay rent just like you.
Please support us, even small amounts help a LOT.
BTC `1PLFN327GyUarpJd7nVe7Reqg9qHx5frNn`
Firstly, consider taking out a [change detection monthly subscription - unlimited checks and watches](https://lemonade.changedetection.io/start); even if you don't use it, you still get the warm fuzzy feeling of helping out the project. (And who knows, you might just use it!)
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/btc-support.png" style="max-width:50%;" alt="Support us!" />
Or donate directly via PayPal [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/donate/?hosted_button_id=7CP6HR9ZCNDYJ)
Or BTC `1PLFN327GyUarpJd7nVe7Reqg9qHx5frNn`
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/btc-support.png" style="max-width:50%;" alt="Support us!" />
## Commercial Support
I offer commercial support; this software is depended on by network security, aerospace, data-science and data-journalism professionals, to name just a few. Please reach out at dgtlmoon@gmail.com for any enquiries; I am more than glad to work with your organisation to further the possibilities of what can be done with changedetection.io
[release-shield]: https://img.shields.io:/github/v/release/dgtlmoon/changedetection.io?style=for-the-badge
[docker-pulls]: https://img.shields.io/docker/pulls/dgtlmoon/changedetection.io?style=for-the-badge
[test-shield]: https://github.com/dgtlmoon/changedetection.io/actions/workflows/test-only.yml/badge.svg?branch=master
[license-shield]: https://img.shields.io/github/license/dgtlmoon/changedetection.io.svg?style=for-the-badge
[release-link]: https://github.com/dgtlmoon/changedetection.io/releases
[docker-link]: https://hub.docker.com/r/dgtlmoon/changedetection.io

app.json Normal file

@@ -0,0 +1,21 @@
{
"name": "ChangeDetection.io",
"description": "The best and simplest self-hosted open source website change detection monitoring and notification service.",
"keywords": [
"changedetection",
"website monitoring"
],
"repository": "https://github.com/dgtlmoon/changedetection.io",
"success_url": "/",
"scripts": {
},
"env": {
},
"formation": {
"web": {
"quantity": 1,
"size": "free"
}
},
"image": "heroku/python"
}


@@ -1,97 +1,11 @@
#!/usr/bin/python3
# Launch as a eventlet.wsgi server instance.
import getopt
import os
import sys
import eventlet
import eventlet.wsgi
import changedetectionio
from changedetectionio import store
def main():
ssl_mode = False
port = os.environ.get('PORT') or 5000
do_cleanup = False
# Must be absolute so that send_from_directory doesnt try to make it relative to backend/
datastore_path = os.path.join(os.getcwd(), "datastore")
try:
opts, args = getopt.getopt(sys.argv[1:], "csd:p:", "port")
except getopt.GetoptError:
print('backend.py -s SSL enable -p [port] -d [datastore path]')
sys.exit(2)
for opt, arg in opts:
# if opt == '--purge':
# Remove history, the actual files you need to delete manually.
# for uuid, watch in datastore.data['watching'].items():
# watch.update({'history': {}, 'last_checked': 0, 'last_changed': 0, 'previous_md5': None})
if opt == '-s':
ssl_mode = True
if opt == '-p':
port = int(arg)
if opt == '-d':
datastore_path = arg
# Cleanup (remove text files that arent in the index)
if opt == '-c':
do_cleanup = True
# isnt there some @thingy to attach to each route to tell it, that this route needs a datastore
app_config = {'datastore_path': datastore_path}
if not os.path.isdir(app_config['datastore_path']):
print ("ERROR: Directory path for the datastore '{}' does not exist, cannot start, please make sure the directory exists.\n"
"Alternatively, use the -d parameter.".format(app_config['datastore_path']),file=sys.stderr)
sys.exit(2)
datastore = store.ChangeDetectionStore(datastore_path=app_config['datastore_path'], version_tag=changedetectionio.__version__)
app = changedetectionio.changedetection_app(app_config, datastore)
# Go into cleanup mode
if do_cleanup:
datastore.remove_unused_snapshots()
app.config['datastore_path'] = datastore_path
@app.context_processor
def inject_version():
return dict(right_sticky="v{}".format(datastore.data['version_tag']),
new_version_available=app.config['NEW_VERSION_AVAILABLE'],
has_password=datastore.data['settings']['application']['password'] != False
)
# Proxy sub-directory support
# Set environment var USE_X_SETTINGS=1 on this script
# And then in your proxy_pass settings
#
# proxy_set_header Host "localhost";
# proxy_set_header X-Forwarded-Prefix /app;
if os.getenv('USE_X_SETTINGS'):
print ("USE_X_SETTINGS is ENABLED\n")
from werkzeug.middleware.proxy_fix import ProxyFix
app.wsgi_app = ProxyFix(app.wsgi_app, x_prefix=1, x_host=1)
if ssl_mode:
# @todo finalise SSL config, but this should get you in the right direction if you need it.
eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen(('', port)),
certfile='cert.pem',
keyfile='privkey.pem',
server_side=True), app)
else:
eventlet.wsgi.server(eventlet.listen(('', int(port))), app)
# Entry-point for running from the CLI when not installed via Pip; Pip will handle the console_scripts entry_points from setup.py
# It's recommended to use `pip3 install changedetection.io` and start with `changedetection.py` instead; it will be linked to your global path.
# or Docker.
# Read more https://github.com/dgtlmoon/changedetection.io/wiki
from changedetectionio import changedetection
if __name__ == '__main__':
main()
changedetection.main()

changedetectionio/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
test-datastore
package-lock.json

File diff suppressed because it is too large



@@ -0,0 +1,124 @@
from flask_restful import abort, Resource
from flask import request, make_response
import validators
from . import auth
# https://www.w3.org/Protocols/rfc2616/rfc2616-sec10.html
class Watch(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
self.update_q = kwargs['update_q']
# Get information about a single watch, excluding the history list (can be large)
# curl http://localhost:4000/api/v1/watch/<string:uuid>
# ?recheck=true
@auth.check_token
def get(self, uuid):
from copy import deepcopy
watch = deepcopy(self.datastore.data['watching'].get(uuid))
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
if request.args.get('recheck'):
self.update_q.put(uuid)
return "OK", 200
# Return without history, get that via another API call
watch['history_n'] = watch.history_n
return watch
@auth.check_token
def delete(self, uuid):
if not self.datastore.data['watching'].get(uuid):
abort(400, message='No watch exists with the UUID of {}'.format(uuid))
self.datastore.delete(uuid)
return 'OK', 204
class WatchHistory(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
# Get a list of available history for a watch by UUID
# curl http://localhost:4000/api/v1/watch/<string:uuid>/history
def get(self, uuid):
watch = self.datastore.data['watching'].get(uuid)
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
return watch.history, 200
class WatchSingleHistory(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
# Read a given history snapshot and return its content
# <string:timestamp> or "latest"
# curl http://localhost:4000/api/v1/watch/<string:uuid>/history/<int:timestamp>
@auth.check_token
def get(self, uuid, timestamp):
watch = self.datastore.data['watching'].get(uuid)
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
if not len(watch.history):
abort(404, message='Watch found but no history exists for the UUID {}'.format(uuid))
if timestamp == 'latest':
timestamp = list(watch.history.keys())[-1]
with open(watch.history[timestamp], 'r') as f:
content = f.read()
response = make_response(content, 200)
response.mimetype = "text/plain"
return response
class CreateWatch(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
self.update_q = kwargs['update_q']
@auth.check_token
def post(self):
# curl http://localhost:4000/api/v1/watch -H "Content-Type: application/json" -d '{"url": "https://my-nice.com", "tag": "one, two" }'
json_data = request.get_json()
tag = json_data['tag'].strip() if json_data.get('tag') else ''
if not validators.url(json_data['url'].strip()):
return "Invalid or unsupported URL", 400
extras = {'title': json_data['title'].strip()} if json_data.get('title') else {}
new_uuid = self.datastore.add_watch(url=json_data['url'].strip(), tag=tag, extras=extras)
self.update_q.put(new_uuid)
return {'uuid': new_uuid}, 201
# Return concise list of available watches and some very basic info
# curl http://localhost:4000/api/v1/watch|python -mjson.tool
# ?recheck_all=1 to recheck all
@auth.check_token
def get(self):
list = {}
for k, v in self.datastore.data['watching'].items():
list[k] = {'url': v['url'],
'title': v['title'],
'last_checked': v['last_checked'],
'last_changed': v['last_changed'],
'last_error': v['last_error']}
if request.args.get('recheck_all'):
for uuid in self.datastore.data['watching'].keys():
self.update_q.put(uuid)
return {'status': "OK"}, 200
return list, 200
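A hedged usage sketch for these endpoints, based on the example `curl` commands in the comments above; the host/port and API key are placeholders:
```python
# Illustrative API client; localhost:4000 and the x-api-key value are placeholders.
import requests

BASE = 'http://localhost:4000/api/v1'
HEADERS = {'x-api-key': 'your-api-access-token'}  # required when API token checking is enabled

# Create a new watch
r = requests.post(f'{BASE}/watch',
                  json={'url': 'https://example.com', 'tag': 'one, two'},
                  headers=HEADERS)
uuid = r.json()['uuid']

# List all watches, then queue a recheck of the new one
print(requests.get(f'{BASE}/watch', headers=HEADERS).json())
requests.get(f'{BASE}/watch/{uuid}', params={'recheck': 'true'}, headers=HEADERS)
```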


@@ -0,0 +1,33 @@
from flask import request, make_response, jsonify
from functools import wraps
# Simple API auth key comparison
# @todo - Maybe short lived token in the future?
def check_token(f):
@wraps(f)
def decorated(*args, **kwargs):
datastore = args[0].datastore
config_api_token_enabled = datastore.data['settings']['application'].get('api_access_token_enabled')
if not config_api_token_enabled:
return
try:
api_key_header = request.headers['x-api-key']
except KeyError:
return make_response(
jsonify("No authorization x-api-key header."), 403
)
config_api_token = datastore.data['settings']['application'].get('api_access_token')
if api_key_header != config_api_token:
return make_response(
jsonify("Invalid access - API key invalid."), 403
)
return f(*args, **kwargs)
return decorated


@@ -0,0 +1,11 @@
import apprise
# Create our AppriseAsset and populate it with some of our new values:
# https://github.com/caronc/apprise/wiki/Development_API#the-apprise-asset-object
asset = apprise.AppriseAsset(
image_url_logo='https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/changedetectionio/static/images/avatar-256x256.png'
)
asset.app_id = "changedetection.io"
asset.app_desc = "ChangeDetection.io best and simplest website monitoring and change detection"
asset.app_url = "https://changedetection.io"
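A small hedged sketch of how an AppriseAsset like this is typically attached to an Apprise instance (the notification URL below is a placeholder):
```python
# Illustrative: attaching the custom AppriseAsset above to an Apprise instance.
import apprise

asset = apprise.AppriseAsset(
    image_url_logo='https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/changedetectionio/static/images/avatar-256x256.png'
)
asset.app_id = "changedetection.io"

a = apprise.Apprise(asset=asset)
a.add('json://someserver.com/custom-api')  # placeholder notification URL
a.notify(title='Change detected', body='Example body text')
```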


@@ -0,0 +1,114 @@
#!/usr/bin/python3
# Launch as a eventlet.wsgi server instance.
import getopt
import os
import sys
import eventlet
import eventlet.wsgi
from . import store, changedetection_app, content_fetcher
from . import __version__
def main():
ssl_mode = False
host = ''
port = os.environ.get('PORT') or 5000
do_cleanup = False
datastore_path = None
# On Windows, create and use a default path.
if os.name == 'nt':
datastore_path = os.path.expandvars(r'%APPDATA%\changedetection.io')
os.makedirs(datastore_path, exist_ok=True)
else:
# Must be absolute so that send_from_directory doesnt try to make it relative to backend/
datastore_path = os.path.join(os.getcwd(), "../datastore")
try:
opts, args = getopt.getopt(sys.argv[1:], "Ccsd:h:p:", "port")
except getopt.GetoptError:
print('backend.py -s SSL enable -h [host] -p [port] -d [datastore path]')
sys.exit(2)
create_datastore_dir = False
for opt, arg in opts:
# if opt == '--clear-all-history':
# Remove history, the actual files you need to delete manually.
# for uuid, watch in datastore.data['watching'].items():
# watch.update({'history': {}, 'last_checked': 0, 'last_changed': 0, 'previous_md5': None})
if opt == '-s':
ssl_mode = True
if opt == '-h':
host = arg
if opt == '-p':
port = int(arg)
if opt == '-d':
datastore_path = arg
# Cleanup (remove text files that arent in the index)
if opt == '-c':
do_cleanup = True
# Create the datadir if it doesnt exist
if opt == '-C':
create_datastore_dir = True
# isnt there some @thingy to attach to each route to tell it, that this route needs a datastore
app_config = {'datastore_path': datastore_path}
if not os.path.isdir(app_config['datastore_path']):
if create_datastore_dir:
os.mkdir(app_config['datastore_path'])
else:
print(
"ERROR: Directory path for the datastore '{}' does not exist, cannot start, please make sure the directory exists or specify a directory with the -d option.\n"
"Or use the -C parameter to create the directory.".format(app_config['datastore_path']), file=sys.stderr)
sys.exit(2)
datastore = store.ChangeDetectionStore(datastore_path=app_config['datastore_path'], version_tag=__version__)
app = changedetection_app(app_config, datastore)
# Go into cleanup mode
if do_cleanup:
datastore.remove_unused_snapshots()
app.config['datastore_path'] = datastore_path
@app.context_processor
def inject_version():
return dict(right_sticky="v{}".format(datastore.data['version_tag']),
new_version_available=app.config['NEW_VERSION_AVAILABLE'],
has_password=datastore.data['settings']['application']['password'] != False
)
# Proxy sub-directory support
# Set environment var USE_X_SETTINGS=1 on this script
# And then in your proxy_pass settings
#
# proxy_set_header Host "localhost";
# proxy_set_header X-Forwarded-Prefix /app;
if os.getenv('USE_X_SETTINGS'):
print ("USE_X_SETTINGS is ENABLED\n")
from werkzeug.middleware.proxy_fix import ProxyFix
app.wsgi_app = ProxyFix(app.wsgi_app, x_prefix=1, x_host=1)
if ssl_mode:
# @todo finalise SSL config, but this should get you in the right direction if you need it.
eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen((host, port)),
certfile='cert.pem',
keyfile='privkey.pem',
server_side=True), app)
else:
eventlet.wsgi.server(eventlet.listen((host, int(port))), app)


@@ -1,31 +1,207 @@
import os
import time
from abc import ABC, abstractmethod
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.common.exceptions import WebDriverException
import urllib3.exceptions
import chardet
import json
import os
import requests
import time
import sys
class PageUnloadable(Exception):
def __init__(self, status_code, url):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
return
pass
class EmptyReply(Exception):
def __init__(self, status_code, url):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
return
pass
class ScreenshotUnavailable(Exception):
def __init__(self, status_code, url):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
return
pass
class ReplyWithContentButNoText(Exception):
def __init__(self, status_code, url):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
return
pass
class Fetcher():
error = None
status_code = None
content = None # Should be bytes?
content = None
headers = None
fetcher_description ="No description"
fetcher_description = "No description"
webdriver_js_execute_code = None
xpath_element_js = """
// Include the getXpath script directly, easier than fetching
!function(e,n){"object"==typeof exports&&"undefined"!=typeof module?module.exports=n():"function"==typeof define&&define.amd?define(n):(e=e||self).getXPath=n()}(this,function(){return function(e){var n=e;if(n&&n.id)return'//*[@id="'+n.id+'"]';for(var o=[];n&&Node.ELEMENT_NODE===n.nodeType;){for(var i=0,r=!1,d=n.previousSibling;d;)d.nodeType!==Node.DOCUMENT_TYPE_NODE&&d.nodeName===n.nodeName&&i++,d=d.previousSibling;for(d=n.nextSibling;d;){if(d.nodeName===n.nodeName){r=!0;break}d=d.nextSibling}o.push((n.prefix?n.prefix+":":"")+n.localName+(i||r?"["+(i+1)+"]":"")),n=n.parentNode}return o.length?"/"+o.reverse().join("/"):""}});
const findUpTag = (el) => {
let r = el
chained_css = [];
depth=0;
// Strategy 1: Keep going up until we hit an ID tag, imagine it's like #list-widget div h4
while (r.parentNode) {
if(depth==5) {
break;
}
if('' !==r.id) {
chained_css.unshift("#"+CSS.escape(r.id));
final_selector= chained_css.join(' > ');
// Be sure theres only one, some sites have multiples of the same ID tag :-(
if (window.document.querySelectorAll(final_selector).length ==1 ) {
return final_selector;
}
return null;
} else {
chained_css.unshift(r.tagName.toLowerCase());
}
r=r.parentNode;
depth+=1;
}
return null;
}
// @todo - if it's SVG or IMG, go into image diff mode
var elements = window.document.querySelectorAll("div,span,form,table,tbody,tr,td,a,p,ul,li,h1,h2,h3,h4, header, footer, section, article, aside, details, main, nav, section, summary");
var size_pos=[];
// after page fetch, inject this JS
// build a map of all elements and their positions (maybe that only include text?)
var bbox;
for (var i = 0; i < elements.length; i++) {
bbox = elements[i].getBoundingClientRect();
// forget really small ones
if (bbox['width'] <20 && bbox['height'] < 20 ) {
continue;
}
// @todo the getXpath kind of sucks, it doesnt know when there is for example just one ID sometimes
// it should not traverse when we know we can anchor off just an ID one level up etc..
// maybe, get current class or id, keep traversing up looking for only class or id until there is just one match
// 1st primitive - if it has class, try joining it all and select, if theres only one.. well thats us.
xpath_result=false;
try {
var d= findUpTag(elements[i]);
if (d) {
xpath_result =d;
}
} catch (e) {
console.log(e);
}
// You could swap it and default to getXpath and then try the smarter one
// default back to the less intelligent one
if (!xpath_result) {
try {
// I've seen on FB and eBay that this doesnt work
// ReferenceError: getXPath is not defined at eval (eval at evaluate (:152:29), <anonymous>:67:20) at UtilityScript.evaluate (<anonymous>:159:18) at UtilityScript.<anonymous> (<anonymous>:1:44)
xpath_result = getXPath(elements[i]);
} catch (e) {
console.log(e);
continue;
}
}
if(window.getComputedStyle(elements[i]).visibility === "hidden") {
continue;
}
size_pos.push({
xpath: xpath_result,
width: Math.round(bbox['width']),
height: Math.round(bbox['height']),
left: Math.floor(bbox['left']),
top: Math.floor(bbox['top']),
childCount: elements[i].childElementCount
});
}
// inject the current one set in the css_filter, which may be a CSS rule
// used for displaying the current one in VisualSelector, where its not one we generated.
if (css_filter.length) {
q=false;
try {
// is it xpath?
if (css_filter.startsWith('/') || css_filter.startsWith('xpath:')) {
q=document.evaluate(css_filter.replace('xpath:',''), document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
} else {
q=document.querySelector(css_filter);
}
} catch (e) {
// Maybe catch DOMException and alert?
console.log(e);
}
bbox=false;
if(q) {
bbox = q.getBoundingClientRect();
}
if (bbox && bbox['width'] >0 && bbox['height']>0) {
size_pos.push({
xpath: css_filter,
width: bbox['width'],
height: bbox['height'],
left: bbox['left'],
top: bbox['top'],
childCount: q.childElementCount
});
}
}
// Window.width required for proper scaling in the frontend
return {'size_pos':size_pos, 'browser_width': window.innerWidth};
"""
xpath_data = None
# Will be needed in the future by the VisualSelector, always get this where possible.
screenshot = False
system_http_proxy = os.getenv('HTTP_PROXY')
system_https_proxy = os.getenv('HTTPS_PROXY')
# Time ONTOP of the system defined env minimum time
render_extract_delay=0
@abstractmethod
def get_error(self):
return self.error
@abstractmethod
def run(self, url, timeout, request_headers):
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_css_filter=None):
# Should set self.error, self.status_code and self.content
pass
@abstractmethod
def quit(self):
return
@abstractmethod
def get_last_status_code(self):
return self.status_code
@@ -35,29 +211,179 @@ class Fetcher():
def is_ready(self):
return True
# Maybe for the future, each fetcher provides its own diff output, could be used for text, image
# the current one would return javascript output (as we use JS to generate the diff)
#
# Returns tuple(mime_type, stream)
# @abstractmethod
# def return_diff(self, stream_a, stream_b):
# return
def available_fetchers():
import inspect
from changedetectionio import content_fetcher
p=[]
for name, obj in inspect.getmembers(content_fetcher):
if inspect.isclass(obj):
# @todo html_ is maybe better as fetcher_ or something
# In this case, make sure to edit the default one in store.py and fetch_site_status.py
if "html_" in name:
t=tuple([name,obj.fetcher_description])
p.append(t)
# See the if statement at the bottom of this file for how we switch between playwright and webdriver
import inspect
p = []
for name, obj in inspect.getmembers(sys.modules[__name__], inspect.isclass):
if inspect.isclass(obj):
# @todo html_ is maybe better as fetcher_ or something
# In this case, make sure to edit the default one in store.py and fetch_site_status.py
if name.startswith('html_'):
t = tuple([name, obj.fetcher_description])
p.append(t)
return p
return p
class html_webdriver(Fetcher):
class base_html_playwright(Fetcher):
fetcher_description = "Playwright {}/Javascript".format(
os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').capitalize()
)
if os.getenv("PLAYWRIGHT_DRIVER_URL"):
fetcher_description += " via '{}'".format(os.getenv("PLAYWRIGHT_DRIVER_URL"))
browser_type = ''
command_executor = ''
# Configs for Proxy setup
# In the ENV vars, is prefixed with "playwright_proxy_", so it is for example "playwright_proxy_server"
playwright_proxy_settings_mappings = ['bypass', 'server', 'username', 'password']
proxy = None
def __init__(self, proxy_override=None):
# .strip('"') is going to save someone a lot of time when they accidently wrap the env value
self.browser_type = os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').strip('"')
self.command_executor = os.getenv(
"PLAYWRIGHT_DRIVER_URL",
'ws://playwright-chrome:3000'
).strip('"')
# If any proxy settings are enabled, then we should setup the proxy object
proxy_args = {}
for k in self.playwright_proxy_settings_mappings:
v = os.getenv('playwright_proxy_' + k, False)
if v:
proxy_args[k] = v.strip('"')
if proxy_args:
self.proxy = proxy_args
# allow per-watch proxy selection override
if proxy_override:
self.proxy = {'server': proxy_override}
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_css_filter=None):
from playwright.sync_api import sync_playwright
import playwright._impl._api_types
from playwright._impl._api_types import Error, TimeoutError
response = None
with sync_playwright() as p:
browser_type = getattr(p, self.browser_type)
# Seemed to cause a connection Exception even tho I can see it connect
# self.browser = browser_type.connect(self.command_executor, timeout=timeout*1000)
# 60,000 connection timeout only
browser = browser_type.connect_over_cdp(self.command_executor, timeout=60000)
# Set user agent to prevent Cloudflare from blocking the browser
# Use the default one configured in the App.py model that's passed from fetch_site_status.py
context = browser.new_context(
user_agent=request_headers['User-Agent'] if request_headers.get('User-Agent') else 'Mozilla/5.0',
proxy=self.proxy,
# This is needed to enable JavaScript execution on GitHub and others
bypass_csp=True,
# Should never be needed
accept_downloads=False
)
if len(request_headers):
context.set_extra_http_headers(request_headers)
page = context.new_page()
try:
page.set_default_navigation_timeout(90000)
page.set_default_timeout(90000)
# Listen for all console events and handle errors
page.on("console", lambda msg: print(f"Playwright console: Watch URL: {url} {msg.type}: {msg.text} {msg.args}"))
# Bug - never set viewport size BEFORE page.goto
# Waits for the next navigation. Using Python context manager
# prevents a race condition between clicking and waiting for a navigation.
with page.expect_navigation():
response = page.goto(url, wait_until='load')
if self.webdriver_js_execute_code is not None:
page.evaluate(self.webdriver_js_execute_code)
except playwright._impl._api_types.TimeoutError as e:
context.close()
browser.close()
# This can be ok, we will try to grab what we could retrieve
pass
except Exception as e:
print ("other exception when page.goto")
print (str(e))
context.close()
browser.close()
raise PageUnloadable(url=url, status_code=None)
if response is None:
context.close()
browser.close()
print ("response object was none")
raise EmptyReply(url=url, status_code=None)
# Bug 2(?) Set the viewport size AFTER loading the page
page.set_viewport_size({"width": 1280, "height": 1024})
extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
time.sleep(extra_wait)
self.content = page.content()
self.status_code = response.status
if len(self.content.strip()) == 0:
context.close()
browser.close()
print ("Content was empty")
raise EmptyReply(url=url, status_code=None)
self.headers = response.all_headers()
if current_css_filter is not None:
page.evaluate("var css_filter={}".format(json.dumps(current_css_filter)))
else:
page.evaluate("var css_filter=''")
self.xpath_data = page.evaluate("async () => {" + self.xpath_element_js + "}")
# Bug 3 in Playwright screenshot handling
# Some bug where it gives the wrong screenshot size, but making a request with the clip set first seems to solve it
# JPEG is better here because the screenshots can be very very large
# Screenshots also travel via the ws:// (websocket) meaning that the binary data is base64 encoded
# which will significantly increase the IO size between the server and client, it's recommended to use the lowest
# acceptable screenshot quality here
try:
# Quality set to 1 because it's not used, just used as a work-around for a bug, no need to change this.
page.screenshot(type='jpeg', clip={'x': 1.0, 'y': 1.0, 'width': 1280, 'height': 1024}, quality=1)
# The actual screenshot
self.screenshot = page.screenshot(type='jpeg', full_page=True, quality=int(os.getenv("PLAYWRIGHT_SCREENSHOT_QUALITY", 72)))
except Exception as e:
context.close()
browser.close()
raise ScreenshotUnavailable(url=url, status_code=None)
context.close()
browser.close()
class base_html_webdriver(Fetcher):
if os.getenv("WEBDRIVER_URL"):
fetcher_description = "WebDriver Chrome/Javascript via '{}'".format(os.getenv("WEBDRIVER_URL"))
else:
@@ -65,67 +391,203 @@ class html_webdriver(Fetcher):
command_executor = ''
def __init__(self):
self.command_executor = os.getenv("WEBDRIVER_URL", 'http://browser-chrome:4444/wd/hub')
# Configs for Proxy setup
# In the ENV vars, is prefixed with "webdriver_", so it is for example "webdriver_sslProxy"
selenium_proxy_settings_mappings = ['proxyType', 'ftpProxy', 'httpProxy', 'noProxy',
'proxyAutoconfigUrl', 'sslProxy', 'autodetect',
'socksProxy', 'socksVersion', 'socksUsername', 'socksPassword']
proxy = None
def run(self, url, timeout, request_headers):
def __init__(self, proxy_override=None):
from selenium.webdriver.common.proxy import Proxy as SeleniumProxy
# .strip('"') is going to save someone a lot of time when they accidently wrap the env value
self.command_executor = os.getenv("WEBDRIVER_URL", 'http://browser-chrome:4444/wd/hub').strip('"')
# If any proxy settings are enabled, then we should setup the proxy object
proxy_args = {}
for k in self.selenium_proxy_settings_mappings:
v = os.getenv('webdriver_' + k, False)
if v:
proxy_args[k] = v.strip('"')
# Map back standard HTTP_ and HTTPS_PROXY to webDriver httpProxy/sslProxy
if not proxy_args.get('webdriver_httpProxy') and self.system_http_proxy:
proxy_args['httpProxy'] = self.system_http_proxy
if not proxy_args.get('webdriver_sslProxy') and self.system_https_proxy:
proxy_args['httpsProxy'] = self.system_https_proxy
# Allows override the proxy on a per-request basis
if proxy_override is not None:
proxy_args['httpProxy'] = proxy_override
if proxy_args:
self.proxy = SeleniumProxy(raw=proxy_args)
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_css_filter=None):
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.common.exceptions import WebDriverException
# request_body, request_method unused for now, until some magic in the future happens.
# check env for WEBDRIVER_URL
driver = webdriver.Remote(
self.driver = webdriver.Remote(
command_executor=self.command_executor,
desired_capabilities=DesiredCapabilities.CHROME)
desired_capabilities=DesiredCapabilities.CHROME,
proxy=self.proxy)
try:
driver.get(url)
self.driver.get(url)
except WebDriverException as e:
# Be sure we close the session window
driver.quit()
self.quit()
raise
self.driver.set_window_size(1280, 1024)
self.driver.implicitly_wait(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)))
if self.webdriver_js_execute_code is not None:
self.driver.execute_script(self.webdriver_js_execute_code)
# Selenium doesn't automatically wait for actions as well as Playwright does, so wait again
self.driver.implicitly_wait(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)))
self.screenshot = self.driver.get_screenshot_as_png()
# @todo - how to check this? is it possible?
self.status_code = 200
# @todo somehow we should try to get this working for WebDriver
# raise EmptyReply(url=url, status_code=r.status_code)
# @todo - dom wait loaded?
time.sleep(5)
self.content = driver.page_source
driver.quit()
time.sleep(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay)
self.content = self.driver.page_source
self.headers = {}
# Does the connection to the webdriver work? run a test connection.
def is_ready(self):
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.common.exceptions import WebDriverException
driver = webdriver.Remote(
self.driver = webdriver.Remote(
command_executor=self.command_executor,
desired_capabilities=DesiredCapabilities.CHROME)
# driver.quit() seems to cause better exceptions
driver.quit()
self.quit()
return True
def quit(self):
if self.driver:
try:
self.driver.quit()
except Exception as e:
print("Exception in chrome shutdown/quit" + str(e))
# "html_requests" is listed as the default fetcher in store.py!
class html_requests(Fetcher):
fetcher_description = "Basic fast Plaintext/HTTP Client"
def run(self, url, timeout, request_headers):
import requests
def __init__(self, proxy_override=None):
self.proxy_override = proxy_override
r = requests.get(url,
headers=request_headers,
timeout=timeout,
verify=False)
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_css_filter=None):
html = r.text
proxies={}
# Allows override the proxy on a per-request basis
if self.proxy_override:
proxies = {'http': self.proxy_override, 'https': self.proxy_override, 'ftp': self.proxy_override}
else:
if self.system_http_proxy:
proxies['http'] = self.system_http_proxy
if self.system_https_proxy:
proxies['https'] = self.system_https_proxy
r = requests.request(method=request_method,
data=request_body,
url=url,
headers=request_headers,
timeout=timeout,
proxies=proxies,
verify=False)
# If the response did not tell us what encoding format to expect, Then use chardet to override what `requests` thinks.
# For example - some sites don't tell us it's utf-8, but return utf-8 content
# This seems to not occur when using webdriver/selenium, it seems to detect the text encoding more reliably.
# https://github.com/psf/requests/issues/1604 good info about requests encoding detection
if not r.headers.get('content-type') or not 'charset=' in r.headers.get('content-type'):
encoding = chardet.detect(r.content)['encoding']
if encoding:
r.encoding = encoding
# @todo test this
if not r or not html or not len(html):
raise EmptyReply(url)
# @todo maybe you really want to test zero-byte return pages?
if (not ignore_status_codes and not r) or not r.content or not len(r.content):
raise EmptyReply(url=url, status_code=r.status_code)
self.status_code = r.status_code
self.content = html
self.content = r.text
self.headers = r.headers
# "html_requests" is listed as the default fetcher in store.py!
class html_fetcher_with_weird_memory_leak(Fetcher):
fetcher_description = "HTTP Fetcher with unexplainable memory leak"
def __init__(self, proxy_override=None):
self.proxy_override = proxy_override
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_css_filter=None):
self.status_code = 200
# Does nothing to help
# with open('memory-leak.html', 'r', encoding="utf-8") as f:
# with open('memory-leak.html', 'r') as f:
# Works but is binary (no good for me)
with open('memory-leak.html', 'r') as f:
wtf = f.read()
# just to prove gc.collect doesnt help, i dont even use 'wtf'
del wtf
wtf="not much"
import gc
gc.collect()
self.content = "<html>foobar</html>"
self.headers = {}
self.xpath_data = '{}'
# Decide which is the 'real' HTML webdriver, this is more a system wide config
# rather than site-specific.
use_playwright_as_chrome_fetcher = os.getenv('PLAYWRIGHT_DRIVER_URL', False)
if use_playwright_as_chrome_fetcher:
html_webdriver = base_html_playwright
else:
html_webdriver = base_html_webdriver

changedetectionio/diff.py Normal file

@@ -0,0 +1,52 @@
# used for the notifications, the front-end is using a JS library
import difflib
def same_slicer(l, a, b):
if a == b:
return [l[a]]
else:
return l[a:b]
# like .compare but a little different output
def customSequenceMatcher(before, after, include_equal=False):
cruncher = difflib.SequenceMatcher(isjunk=lambda x: x in " \\t", a=before, b=after)
# @todo Line-by-line mode instead of bunched, including `after` that is not in `before` (maybe unset?)
for tag, alo, ahi, blo, bhi in cruncher.get_opcodes():
if include_equal and tag == 'equal':
g = before[alo:ahi]
yield g
elif tag == 'delete':
g = ["(removed) " + i for i in same_slicer(before, alo, ahi)]
yield g
elif tag == 'replace':
g = ["(changed) " + i for i in same_slicer(before, alo, ahi)]
g += ["(into ) " + i for i in same_slicer(after, blo, bhi)]
yield g
elif tag == 'insert':
g = ["(added ) " + i for i in same_slicer(after, blo, bhi)]
yield g
# only_differences - only return info about the differences, no context
# line_feed_sep could be "<br/>" or "<li>" or "\n" etc
def render_diff(previous_file, newest_file, include_equal=False, line_feed_sep="\n"):
with open(newest_file, 'r') as f:
newest_version_file_contents = f.read()
newest_version_file_contents = [line.rstrip() for line in newest_version_file_contents.splitlines()]
if previous_file:
with open(previous_file, 'r') as f:
previous_version_file_contents = f.read()
previous_version_file_contents = [line.rstrip() for line in previous_version_file_contents.splitlines()]
else:
previous_version_file_contents = ""
rendered_diff = customSequenceMatcher(previous_version_file_contents,
newest_version_file_contents,
include_equal)
# Recursively join lists
f = lambda L: line_feed_sep.join([f(x) if type(x) is list else x for x in L])
return f(rendered_diff)
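A brief usage sketch for `render_diff()`; the snapshot paths below are placeholders for two stored text versions of a watch:
```python
# Illustrative usage of render_diff(); the snapshot file paths are placeholders.
from changedetectionio import diff

html_fragment = diff.render_diff('/datastore/<uuid>/older-snapshot.txt',
                                 '/datastore/<uuid>/newest-snapshot.txt',
                                 include_equal=False,
                                 line_feed_sep="<br/>")
print(html_fragment)  # "(removed) ...", "(changed) ...", "(added  ) ..." lines joined by <br/>
```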


@@ -1,67 +1,82 @@
import time
from changedetectionio import content_fetcher
import hashlib
from inscriptis import get_text
import urllib3
from . import html_tools
import logging
import os
import re
import time
import urllib3
from changedetectionio import content_fetcher, html_tools
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
# Some common stuff here that can be moved to a base class
# (set_proxy_from_list)
class perform_site_check():
def __init__(self, *args, datastore, **kwargs):
super().__init__(*args, **kwargs)
self.datastore = datastore
def strip_ignore_text(self, content, list_ignore_text):
import re
ignore = []
ignore_regex = []
for k in list_ignore_text:
# If there was a proxy list enabled, figure out what proxy_args/which proxy to use
# if watch.proxy use that
# fetcher.proxy_override = watch.proxy or main config proxy
# Allows override the proxy on a per-request basis
# ALWAYS use the first one is nothing selected
# Is it a regex?
if k[0] == '/':
ignore_regex.append(k.strip(" /"))
else:
ignore.append(k)
def set_proxy_from_list(self, watch):
proxy_args = None
if self.datastore.proxy_list is None:
return None
output = []
for line in content.splitlines():
# If its a valid one
if any([watch['proxy'] in p for p in self.datastore.proxy_list]):
proxy_args = watch['proxy']
# Always ignore blank lines in this mode. (when this function gets called)
if len(line.strip()):
regex_matches = False
# not valid (including None), try the system one
else:
system_proxy = self.datastore.data['settings']['requests']['proxy']
# Is not None and exists
if any([system_proxy in p for p in self.datastore.proxy_list]):
proxy_args = system_proxy
# if any of these match, skip
for regex in ignore_regex:
try:
if re.search(regex, line, re.IGNORECASE):
regex_matches = True
except Exception as e:
continue
# Fallback - Did not resolve anything, use the first available
if proxy_args is None:
proxy_args = self.datastore.proxy_list[0][0]
if not regex_matches and not any(skip_text in line for skip_text in ignore):
output.append(line.encode('utf8'))
return proxy_args
return "\n".encode('utf8').join(output)
# Doesn't look like python supports forward slash auto enclosure in re.findall
# So convert it to inline flag "foobar(?i)" type configuration
def forward_slash_enclosed_regex_to_options(self, regex):
res = re.search(r'^/(.*?)/(\w+)$', regex, re.IGNORECASE)
if res:
regex = res.group(1)
regex += '(?{})'.format(res.group(2))
else:
regex += '(?{})'.format('i')
return regex
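# Example (illustrative): forward_slash_enclosed_regex_to_options('/pri.e/i') returns 'pri.e(?i)',
# and a bare 'pri.e' also becomes 'pri.e(?i)', so matching defaults to case-insensitive.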
def run(self, uuid):
timestamp = int(time.time()) # used for storage etc too
changed_detected = False
screenshot = False # as bytes
stripped_text_from_html = ""
watch = self.datastore.data['watching'][uuid]
update_obj = {'previous_md5': self.datastore.data['watching'][uuid]['previous_md5'],
'history': {},
"last_checked": timestamp
}
# Protect against file:// access
if re.search(r'^file', watch['url'], re.IGNORECASE) and not os.getenv('ALLOW_FILE_URI', False):
raise Exception(
"file:// type access is denied for security reasons."
)
# Unset any existing notification error
update_obj = {'last_notification_error': False, 'last_error': False}
extra_headers = self.datastore.get_val(uuid, 'headers')
@@ -75,104 +90,226 @@ class perform_site_check():
if 'Accept-Encoding' in request_headers and "br" in request_headers['Accept-Encoding']:
request_headers['Accept-Encoding'] = request_headers['Accept-Encoding'].replace(', br', '')
# @todo check the failures are really handled how we expect
timeout = self.datastore.data['settings']['requests']['timeout']
url = self.datastore.get_val(uuid, 'url')
request_body = self.datastore.get_val(uuid, 'body')
request_method = self.datastore.get_val(uuid, 'method')
ignore_status_code = self.datastore.get_val(uuid, 'ignore_status_codes')
# source: support
is_source = False
if url.startswith('source:'):
url = url.replace('source:', '')
is_source = True
# Pluggable content fetcher
prefer_backend = watch['fetch_backend']
if hasattr(content_fetcher, prefer_backend):
klass = getattr(content_fetcher, prefer_backend)
else:
timeout = self.datastore.data['settings']['requests']['timeout']
url = self.datastore.get_val(uuid, 'url')
# If the klass doesn't exist, just use a default
klass = getattr(content_fetcher, "html_requests")
# Pluggable content fetcher
prefer_backend = watch['fetch_backend']
if hasattr(content_fetcher, prefer_backend):
klass = getattr(content_fetcher, prefer_backend)
proxy_args = self.set_proxy_from_list(watch)
fetcher = klass(proxy_override=proxy_args)
# Configurable per-watch or global extra delay before extracting text (for webDriver types)
system_webdriver_delay = self.datastore.data['settings']['application'].get('webdriver_delay', None)
if watch['webdriver_delay'] is not None:
fetcher.render_extract_delay = watch['webdriver_delay']
elif system_webdriver_delay is not None:
fetcher.render_extract_delay = system_webdriver_delay
if watch['webdriver_js_execute_code'] is not None and watch['webdriver_js_execute_code'].strip():
fetcher.webdriver_js_execute_code = watch['webdriver_js_execute_code']
fetcher.run(url, timeout, request_headers, request_body, request_method, ignore_status_code, watch['css_filter'])
fetcher.quit()
# Fetching complete, now filters
# @todo move to class / maybe inside of fetcher abstract base?
# @note: I feel like the following should be in a more obvious chain system
# - Check filter text
# - Is the checksum different?
# - Do we convert to JSON?
# https://stackoverflow.com/questions/41817578/basic-method-chaining ?
# return content().textfilter().jsonextract().checksumcompare() ?
is_json = 'application/json' in fetcher.headers.get('Content-Type', '')
is_html = not is_json
# source: support, basically treat it as plaintext
if is_source:
is_html = False
is_json = False
css_filter_rule = watch['css_filter']
subtractive_selectors = watch.get(
"subtractive_selectors", []
) + self.datastore.data["settings"]["application"].get(
"global_subtractive_selectors", []
)
has_filter_rule = css_filter_rule and len(css_filter_rule.strip())
has_subtractive_selectors = subtractive_selectors and len(subtractive_selectors[0].strip())
if is_json and not has_filter_rule:
css_filter_rule = "json:$"
has_filter_rule = True
if has_filter_rule:
if 'json:' in css_filter_rule:
stripped_text_from_html = html_tools.extract_json_as_string(content=fetcher.content, jsonpath_filter=css_filter_rule)
is_html = False
if is_html or is_source:
# CSS Filter, extract the HTML that matches and feed that into the existing inscriptis::get_text
fetcher.content = html_tools.workarounds_for_obfuscations(fetcher.content)
html_content = fetcher.content
# If not JSON, and if it's not text/plain..
if 'text/plain' in fetcher.headers.get('Content-Type', '').lower():
# Don't run get_text or xpath/css filters on plaintext
stripped_text_from_html = html_content
else:
# If the klass doesn't exist, just use a default
klass = getattr(content_fetcher, "html_requests")
fetcher = klass()
fetcher.run(url, timeout, request_headers)
# Fetching complete, now filters
# @todo move to class / maybe inside of fetcher abstract base?
# @note: I feel like the following should be in a more obvious chain system
# - Check filter text
# - Is the checksum different?
# - Do we convert to JSON?
# https://stackoverflow.com/questions/41817578/basic-method-chaining ?
# return content().textfilter().jsonextract().checksumcompare() ?
is_html = True
css_filter_rule = watch['css_filter']
if css_filter_rule and len(css_filter_rule.strip()):
if 'json:' in css_filter_rule:
stripped_text_from_html = html_tools.extract_json_as_string(content=fetcher.content, jsonpath_filter=css_filter_rule)
is_html = False
else:
# CSS Filter, extract the HTML that matches and feed that into the existing inscriptis::get_text
stripped_text_from_html = html_tools.css_filter(css_filter=css_filter_rule, html_content=fetcher.content)
if is_html:
# CSS Filter, extract the HTML that matches and feed that into the existing inscriptis::get_text
html_content = fetcher.content
if css_filter_rule and len(css_filter_rule.strip()):
html_content = html_tools.css_filter(css_filter=css_filter_rule, html_content=fetcher.content)
# get_text() via inscriptis
stripped_text_from_html = get_text(html_content)
# We rely on the actual text in the html output.. many sites have random script vars etc,
# in the future we'll implement other mechanisms.
update_obj["last_check_status"] = fetcher.get_last_status_code()
update_obj["last_error"] = False
# If there's text to skip
# @todo we could abstract out the get_text() to handle this cleaner
if len(watch['ignore_text']):
stripped_text_from_html = self.strip_ignore_text(stripped_text_from_html, watch['ignore_text'])
else:
stripped_text_from_html = stripped_text_from_html.encode('utf8')
# Then we assume HTML
if has_filter_rule:
# For HTML/XML we offer xpath as an option, just start a regular xPath "/.."
if css_filter_rule[0] == '/' or css_filter_rule.startswith('xpath:'):
html_content = html_tools.xpath_filter(xpath_filter=css_filter_rule.replace('xpath:', ''),
html_content=fetcher.content)
else:
# CSS Filter, extract the HTML that matches and feed that into the existing inscriptis::get_text
html_content = html_tools.css_filter(css_filter=css_filter_rule, html_content=fetcher.content)
if has_subtractive_selectors:
html_content = html_tools.element_removal(subtractive_selectors, html_content)
if not is_source:
# extract text
stripped_text_from_html = \
html_tools.html_to_text(
html_content,
render_anchor_tag_content=self.datastore.data["settings"][
"application"].get(
"render_anchor_tag_content", False)
)
elif is_source:
stripped_text_from_html = html_content
# Re #340 - return the content before the 'ignore text' was applied
text_content_before_ignored_filter = stripped_text_from_html.encode('utf-8')
# Re #340 - return the content before the 'ignore text' was applied
text_content_before_ignored_filter = stripped_text_from_html.encode('utf-8')
# Treat pages with no renderable text content as a change? No by default
empty_pages_are_a_change = self.datastore.data['settings']['application'].get('empty_pages_are_a_change', False)
if not is_json and not empty_pages_are_a_change and len(stripped_text_from_html.strip()) == 0:
raise content_fetcher.ReplyWithContentButNoText(url=url, status_code=200)
# We rely on the actual text in the html output.. many sites have random script vars etc,
# in the future we'll implement other mechanisms.
update_obj["last_check_status"] = fetcher.get_last_status_code()
# If there's text to skip
# @todo we could abstract out the get_text() to handle this cleaner
text_to_ignore = watch.get('ignore_text', []) + self.datastore.data['settings']['application'].get('global_ignore_text', [])
if len(text_to_ignore):
stripped_text_from_html = html_tools.strip_ignore_text(stripped_text_from_html, text_to_ignore)
else:
stripped_text_from_html = stripped_text_from_html.encode('utf8')
# 615 Extract text by regex
extract_text = watch.get('extract_text', [])
if len(extract_text) > 0:
regex_matched_output = []
for s_re in extract_text:
# in case they specified something in '/.../x'
regex = self.forward_slash_enclosed_regex_to_options(s_re)
result = re.findall(regex.encode('utf-8'), stripped_text_from_html)
for l in result:
if type(l) is tuple:
#@todo - some formatter option default (between groups)
regex_matched_output += list(l) + [b'\n']
else:
# @todo - some formatter option default (between each ungrouped result)
regex_matched_output += [l] + [b'\n']
# Now we will only show what the regex matched
stripped_text_from_html = b''
text_content_before_ignored_filter = b''
if regex_matched_output:
# @todo some formatter for presentation?
stripped_text_from_html = b''.join(regex_matched_output)
text_content_before_ignored_filter = stripped_text_from_html
# Re #133 - if we should strip whitespaces from triggering the change detected comparison
if self.datastore.data['settings']['application'].get('ignore_whitespace', False):
fetched_md5 = hashlib.md5(stripped_text_from_html.translate(None, b'\r\n\t ')).hexdigest()
else:
fetched_md5 = hashlib.md5(stripped_text_from_html).hexdigest()
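# A minimal sketch (not from this changeset): with 'ignore_whitespace' enabled, two snapshots that differ only in
# spacing hash to the same checksum, so no change is detected. The sample bytes below are hypothetical.
# import hashlib
# snapshot_a = b"Price:\t $10\n"
# snapshot_b = b"Price: $10"
# hashlib.md5(snapshot_a.translate(None, b'\r\n\t ')).hexdigest() == hashlib.md5(snapshot_b.translate(None, b'\r\n\t ')).hexdigest()  # True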
blocked_by_not_found_trigger_text = False
############ Blocking rules, after checksum #################
blocked = False
if len(watch['trigger_text']):
blocked_by_not_found_trigger_text = True
for line in watch['trigger_text']:
# Because JSON won't serialize a re.compile object
if line[0] == '/' and line[-1] == '/':
regex = re.compile(line.strip('/'), re.IGNORECASE)
# Found it? so we don't wait for it anymore
r = re.search(regex, str(stripped_text_from_html))
if r:
blocked_by_not_found_trigger_text = False
break
elif line.lower() in str(stripped_text_from_html).lower():
# We found it don't wait for it.
blocked_by_not_found_trigger_text = False
break
if len(watch['trigger_text']):
# Assume blocked
blocked = True
# Filter and trigger works the same, so reuse it
# It should return the line numbers that match
result = html_tools.strip_ignore_text(content=str(stripped_text_from_html),
wordlist=watch['trigger_text'],
mode="line numbers")
# Unblock if the trigger was found
if result:
blocked = False
# could be None or False depending on JSON type
# On the first run of a site, watch['previous_md5'] will be an empty string
if not blocked_by_not_found_trigger_text and watch['previous_md5'] != fetched_md5:
changed_detected = True
if len(watch['text_should_not_be_present']):
# If anything matched, then we should block a change from happening
result = html_tools.strip_ignore_text(content=str(stripped_text_from_html),
wordlist=watch['text_should_not_be_present'],
mode="line numbers")
if result:
blocked = True
# Don't confuse people by updating as last-changed, when it actually just changed from None..
if self.datastore.get_val(uuid, 'previous_md5'):
update_obj["last_changed"] = timestamp
# The main thing that all this at the moment comes down to :)
if watch['previous_md5'] != fetched_md5:
changed_detected = True
update_obj["previous_md5"] = fetched_md5
# Looks like something changed, but did it match all the rules?
if blocked:
changed_detected = False
# Extract title as title
if is_html and self.datastore.data['settings']['application']['extract_title_as_title']:
# Extract title as title
if is_html:
if self.datastore.data['settings']['application']['extract_title_as_title'] or watch['extract_title_as_title']:
if not watch['title'] or not len(watch['title']):
update_obj['title'] = html_tools.extract_element(find='title', html_content=fetcher.content)
if changed_detected:
if watch.get('check_unique_lines', False):
has_unique_lines = watch.lines_contain_something_unique_compared_to_history(lines=stripped_text_from_html.splitlines())
# One or more lines? unsure?
if not has_unique_lines:
logging.debug("check_unique_lines: UUID {} didnt have anything new setting change_detected=False".format(uuid))
changed_detected = False
else:
logging.debug("check_unique_lines: UUID {} had unique content".format(uuid))
return changed_detected, update_obj, stripped_text_from_html
# Always record the new checksum
update_obj["previous_md5"] = fetched_md5
# On the first run of a site, watch['previous_md5'] will be None, set it the current one.
if not watch.get('previous_md5'):
watch['previous_md5'] = fetched_md5
return changed_detected, update_obj, text_content_before_ignored_filter, fetcher.screenshot, fetcher.xpath_data


@@ -1,39 +1,74 @@
from wtforms import Form, SelectField, RadioField, BooleanField, StringField, PasswordField, validators, IntegerField, fields, TextAreaField, \
Field
from wtforms import widgets
from wtforms.validators import ValidationError
from wtforms.fields import html5
from changedetectionio import content_fetcher
import re
from wtforms import (
BooleanField,
Field,
Form,
IntegerField,
PasswordField,
RadioField,
SelectField,
StringField,
SubmitField,
TextAreaField,
fields,
validators,
widgets,
)
from wtforms.validators import ValidationError
from changedetectionio import content_fetcher
from changedetectionio.notification import (
default_notification_body,
default_notification_format,
default_notification_title,
valid_notification_formats,
)
from wtforms.fields import FormField
valid_method = {
'GET',
'POST',
'PUT',
'PATCH',
'DELETE',
}
default_method = 'GET'
class StringListField(StringField):
widget = widgets.TextArea()
def _value(self):
if self.data:
return "\r\n".join(self.data)
# ignore empty lines in the storage
data = list(filter(lambda x: len(x.strip()), self.data))
# Apply strip to each line
data = list(map(lambda x: x.strip(), data))
return "\r\n".join(data)
else:
return u''
# incoming
def process_formdata(self, valuelist):
if valuelist:
# Remove empty strings
cleaned = list(filter(None, valuelist[0].split("\n")))
self.data = [x.strip() for x in cleaned]
p = 1
if valuelist and len(valuelist[0].strip()):
# Remove empty strings, stripping and splitting \r\n, only \n etc.
self.data = valuelist[0].splitlines()
# Remove empty lines from the final data
self.data = list(filter(lambda x: len(x.strip()), self.data))
else:
self.data = []
class SaltyPasswordField(StringField):
widget = widgets.PasswordInput()
encrypted_password = ""
def build_password(self, password):
import hashlib
import base64
import hashlib
import secrets
# Make a new salt on every new password and store it with the password
@@ -54,6 +89,13 @@ class SaltyPasswordField(StringField):
else:
self.data = False
class TimeBetweenCheckForm(Form):
weeks = IntegerField('Weeks', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
days = IntegerField('Days', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
hours = IntegerField('Hours', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
minutes = IntegerField('Minutes', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
seconds = IntegerField('Seconds', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
# @todo add total seconds minimum validator = minimum_seconds_recheck_time
# Separated by key:value
class StringDictKeyValue(StringField):
@@ -91,8 +133,8 @@ class ValidateContentFetcherIsReady(object):
self.message = message
def __call__(self, form, field):
from changedetectionio import content_fetcher
import urllib3.exceptions
from changedetectionio import content_fetcher
# Better would be a radiohandler that keeps a reference to each class
if field.data is not None:
@@ -104,10 +146,12 @@ class ValidateContentFetcherIsReady(object):
except urllib3.exceptions.MaxRetryError as e:
driver_url = some_object.command_executor
message = field.gettext('Content fetcher \'%s\' did not respond.' % (field.data))
message += '<br/>'+field.gettext('Be sure that the selenium/webdriver runner is running and accessible via network from this container/host.')
message += '<br/>' + field.gettext(
'Be sure that the selenium/webdriver runner is running and accessible via network from this container/host.')
message += '<br/>' + field.gettext('Did you follow the instructions in the wiki?')
message += '<br/><br/>' + field.gettext('WebDriver Host: %s' % (driver_url))
message += '<br/><a href="https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver">Go here for more information</a>'
message += '<br/>'+field.gettext('Content fetcher did not respond properly, unable to use it.\n %s' % (str(e)))
raise ValidationError(message)
@@ -116,6 +160,21 @@ class ValidateContentFetcherIsReady(object):
raise ValidationError(message % (field.data, e))
class ValidateNotificationBodyAndTitleWhenURLisSet(object):
"""
Validates that they entered something in both notification title+body when the URL is set
Due to https://github.com/dgtlmoon/changedetection.io/issues/360
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
if len(field.data):
if not len(form.notification_title.data) or not len(form.notification_body.data):
message = field.gettext('Notification Body and Title is required when a Notification URL is used')
raise ValidationError(message)
class ValidateAppRiseServers(object):
"""
Validates that each URL given is compatible with AppRise
@@ -147,6 +206,23 @@ class ValidateTokensList(object):
if not p.strip('{}') in notification.valid_tokens:
message = field.gettext('Token \'%s\' is not a valid token.')
raise ValidationError(message % (p))
class validateURL(object):
"""
Flask wtform validators won't work with basic auth
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
import validators
try:
validators.url(field.data.strip())
except validators.ValidationFailure:
message = field.gettext('\'%s\' is not a valid URL.' % (field.data.strip()))
raise ValidationError(message)
class ValidateListRegex(object):
"""
@@ -167,62 +243,166 @@ class ValidateListRegex(object):
message = field.gettext('RegEx \'%s\' is not a valid regular expression.')
raise ValidationError(message % (line))
class ValidateCSSJSONInput(object):
class ValidateCSSJSONXPATHInput(object):
"""
Filter validation
@todo CSS validator ;)
"""
def __init__(self, message=None):
def __init__(self, message=None, allow_xpath=True, allow_json=True):
self.message = message
self.allow_xpath = allow_xpath
self.allow_json = allow_json
def __call__(self, form, field):
if 'json:' in field.data:
from jsonpath_ng.exceptions import JsonPathParserError
from jsonpath_ng import jsonpath, parse
input = field.data.replace('json:', '')
if isinstance(field.data, str):
data = [field.data]
else:
data = field.data
try:
parse(input)
except JsonPathParserError as e:
message = field.gettext('\'%s\' is not a valid JSONPath expression. (%s)')
raise ValidationError(message % (input, str(e)))
for line in data:
# Nothing to see here
if not len(line.strip()):
return
# Does it look like XPath?
if line.strip()[0] == '/':
if not self.allow_xpath:
raise ValidationError("XPath not permitted in this field!")
from lxml import etree, html
tree = html.fromstring("<html></html>")
try:
tree.xpath(line.strip())
except etree.XPathEvalError as e:
message = field.gettext('\'%s\' is not a valid XPath expression. (%s)')
raise ValidationError(message % (line, str(e)))
except:
raise ValidationError("A system-error occurred when validating your XPath expression")
if 'json:' in line:
if not self.allow_json:
raise ValidationError("JSONPath not permitted in this field!")
from jsonpath_ng.exceptions import (
JsonPathLexerError,
JsonPathParserError,
)
from jsonpath_ng.ext import parse
input = line.replace('json:', '')
try:
parse(input)
except (JsonPathParserError, JsonPathLexerError) as e:
message = field.gettext('\'%s\' is not a valid JSONPath expression. (%s)')
raise ValidationError(message % (input, str(e)))
except:
raise ValidationError("A system-error occurred when validating your JSONPath expression")
# Re #265 - maybe in the future fetch the page and offer a
# warning/notice that it's possible the rule doesn't yet match anything?
class quickWatchForm(Form):
# https://wtforms.readthedocs.io/en/2.3.x/fields/#module-wtforms.fields.html5
# `require_tld` = False is needed even for the test harness "http://localhost:5005.." to run
url = html5.URLField('URL', [validators.URL(require_tld=False)])
tag = StringField('Group tag', [validators.Optional(), validators.Length(max=35)])
url = fields.URLField('URL', validators=[validateURL()])
tag = StringField('Group tag', [validators.Optional()])
watch_submit_button = SubmitField('Watch', render_kw={"class": "pure-button pure-button-primary"})
edit_and_watch_submit_button = SubmitField('Edit > Watch', render_kw={"class": "pure-button pure-button-primary"})
# Common to a single watch and the global settings
class commonSettingsForm(Form):
notification_urls = StringListField('Notification URL List', validators=[validators.Optional(), ValidateAppRiseServers()])
notification_title = StringField('Notification Title', default='ChangeDetection.io Notification - {watch_url}', validators=[validators.Optional(), ValidateTokensList()])
notification_body = TextAreaField('Notification Body', default='{watch_url} had a change.', validators=[validators.Optional(), ValidateTokensList()])
trigger_check = BooleanField('Send test notification on save')
fetch_backend = RadioField(u'Fetch Method', choices=content_fetcher.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
notification_urls = StringListField('Notification URL list', validators=[validators.Optional(), ValidateNotificationBodyAndTitleWhenURLisSet(), ValidateAppRiseServers()])
notification_title = StringField('Notification title', default=default_notification_title, validators=[validators.Optional(), ValidateTokensList()])
notification_body = TextAreaField('Notification body', default=default_notification_body, validators=[validators.Optional(), ValidateTokensList()])
notification_format = SelectField('Notification format', choices=valid_notification_formats.keys(), default=default_notification_format)
fetch_backend = RadioField(u'Fetch method', choices=content_fetcher.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
extract_title_as_title = BooleanField('Extract <title> from document and use as watch title', default=False)
webdriver_delay = IntegerField('Wait seconds before extracting text', validators=[validators.Optional(), validators.NumberRange(min=1, message="Should contain one or more seconds")] )
class watchForm(commonSettingsForm):
url = html5.URLField('URL', [validators.URL(require_tld=False)])
tag = StringField('Group tag', [validators.Optional(), validators.Length(max=35)])
url = fields.URLField('URL', validators=[validateURL()])
tag = StringField('Group tag', [validators.Optional()], default='')
minutes_between_check = html5.IntegerField('Maximum time in minutes until recheck',
[validators.Optional(), validators.NumberRange(min=1)])
css_filter = StringField('CSS/JSON Filter', [ValidateCSSJSONInput()])
title = StringField('Title')
time_between_check = FormField(TimeBetweenCheckForm)
ignore_text = StringListField('Ignore Text', [ValidateListRegex()])
headers = StringDictKeyValue('Request Headers')
css_filter = StringField('CSS/JSON/XPATH Filter', [ValidateCSSJSONXPATHInput()], default='')
subtractive_selectors = StringListField('Remove elements', [ValidateCSSJSONXPATHInput(allow_xpath=False, allow_json=False)])
extract_text = StringListField('Extract text', [ValidateListRegex()])
title = StringField('Title', default='')
ignore_text = StringListField('Ignore text', [ValidateListRegex()])
headers = StringDictKeyValue('Request headers')
body = TextAreaField('Request body', [validators.Optional()])
method = SelectField('Request method', choices=valid_method, default=default_method)
ignore_status_codes = BooleanField('Ignore status codes (process non-2xx status codes as normal)', default=False)
check_unique_lines = BooleanField('Only trigger when new lines appear', default=False)
trigger_text = StringListField('Trigger/wait for text', [validators.Optional(), ValidateListRegex()])
text_should_not_be_present = StringListField('Block change-detection if text matches', [validators.Optional(), ValidateListRegex()])
webdriver_js_execute_code = TextAreaField('Execute JavaScript before change detection', render_kw={"rows": "5"}, validators=[validators.Optional()])
save_button = SubmitField('Save', render_kw={"class": "pure-button pure-button-primary"})
save_and_preview_button = SubmitField('Save & Preview', render_kw={"class": "pure-button pure-button-primary"})
proxy = RadioField('Proxy')
filter_failure_notification_send = BooleanField(
'Send a notification when the filter can no longer be found on the page', default=False)
def validate(self, **kwargs):
if not super().validate():
return False
result = True
# Fail form validation when a body is set for a GET
if self.method.data == 'GET' and self.body.data:
self.body.errors.append('Body must be empty when Request Method is set to GET')
result = False
return result
class globalSettingsForm(commonSettingsForm):
# datastore.data['settings']['requests']..
class globalSettingsRequestForm(Form):
time_between_check = FormField(TimeBetweenCheckForm)
proxy = RadioField('Proxy')
jitter_seconds = IntegerField('Random jitter seconds ± check',
render_kw={"style": "width: 5em;"},
validators=[validators.NumberRange(min=0, message="Should contain zero or more seconds")])
# datastore.data['settings']['application']..
class globalSettingsApplicationForm(commonSettingsForm):
base_url = StringField('Base URL', validators=[validators.Optional()])
global_subtractive_selectors = StringListField('Remove elements', [ValidateCSSJSONXPATHInput(allow_xpath=False, allow_json=False)])
global_ignore_text = StringListField('Ignore Text', [ValidateListRegex()])
ignore_whitespace = BooleanField('Ignore whitespace')
real_browser_save_screenshot = BooleanField('Save last screenshot when using Chrome?')
removepassword_button = SubmitField('Remove password', render_kw={"class": "pure-button pure-button-primary"})
empty_pages_are_a_change = BooleanField('Treat empty pages as a change?', default=False)
render_anchor_tag_content = BooleanField('Render anchor tag content', default=False)
fetch_backend = RadioField('Fetch Method', default="html_requests", choices=content_fetcher.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
api_access_token_enabled = BooleanField('API access token security check enabled', default=True, validators=[validators.Optional()])
password = SaltyPasswordField()
minutes_between_check = html5.IntegerField('Maximum time in minutes until recheck',
[validators.NumberRange(min=1)])
extract_title_as_title = BooleanField('Extract <title> from document and use as watch title')
filter_failure_notification_threshold_attempts = IntegerField('Number of times the filter can be missing before sending a notification',
render_kw={"style": "width: 5em;"},
validators=[validators.NumberRange(min=0,
message="Should contain zero or more attempts")])
class globalSettingsForm(Form):
# Define these as FormFields/"sub forms", this way it matches the JSON storage
# datastore.data['settings']['application']..
# datastore.data['settings']['requests']..
requests = FormField(globalSettingsRequestForm)
application = FormField(globalSettingsApplicationForm)
save_button = SubmitField('Save', render_kw={"class": "pure-button pure-button-primary"})


@@ -1,21 +1,69 @@
import json
from bs4 import BeautifulSoup
from jsonpath_ng import parse
from typing import List
from bs4 import BeautifulSoup
from jsonpath_ng.ext import parse
import re
from inscriptis import get_text
from inscriptis.model.config import ParserConfig
class FilterNotFoundInResponse(ValueError):
def __init__(self, msg):
ValueError.__init__(self, msg)
class JSONNotFound(ValueError):
def __init__(self, msg):
ValueError.__init__(self, msg)
# Given a CSS Rule, and a blob of HTML, return the blob of HTML that matches
def css_filter(css_filter, html_content):
soup = BeautifulSoup(html_content, "html.parser")
html_block = ""
for item in soup.select(css_filter, separator=""):
r = soup.select(css_filter, separator="")
if len(html_content) > 0 and len(r) == 0:
raise FilterNotFoundInResponse(css_filter)
for item in r:
html_block += str(item)
return html_block + "\n"
def subtractive_css_selector(css_selector, html_content):
soup = BeautifulSoup(html_content, "html.parser")
for item in soup.select(css_selector):
item.decompose()
return str(soup)
def element_removal(selectors: List[str], html_content):
"""Joins individual filters into one css filter."""
selector = ",".join(selectors)
return subtractive_css_selector(selector, html_content)
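# A minimal usage sketch (not from this changeset) for element_removal() above; the selectors and markup are hypothetical.
# cleaned = element_removal(["nav", ".ads"], "<body><nav>menu</nav><div class='ads'>buy!</div><p>Price: $10</p></body>")
# 'cleaned' keeps "<p>Price: $10</p>" but the <nav> element and the class='ads' element are decomposed away.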
# Return str Utf-8 of matched rules
def xpath_filter(xpath_filter, html_content):
from lxml import etree, html
tree = html.fromstring(bytes(html_content, encoding='utf-8'))
html_block = ""
r = tree.xpath(xpath_filter.strip(), namespaces={'re': 'http://exslt.org/regular-expressions'})
if len(html_content) > 0 and len(r) == 0:
raise FilterNotFoundInResponse(xpath_filter)
#@note: //title/text() won't work where <title> contains CDATA..
for element in r:
if type(element) == etree._ElementStringResult:
html_block += str(element) + "<br/>"
elif type(element) == etree._ElementUnicodeResult:
html_block += str(element) + "<br/>"
else:
html_block += etree.tostring(element, pretty_print=True).decode('utf-8') + "<br/>"
return html_block
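# A minimal usage sketch (not from this changeset): string results such as //title/text() are joined with <br/>,
# while element results are serialised back to markup. The sample document is hypothetical, outputs approximate.
# doc = "<html><head><title>Weekly specials</title></head><body><p>hello</p></body></html>"
# xpath_filter("//title/text()", doc)   # roughly "Weekly specials<br/>"
# xpath_filter("//p", doc)              # roughly "<p>hello</p>\n<br/>"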
# Extract/find element
def extract_element(find='title', html_content=''):
@@ -45,10 +93,13 @@ def _parse_json(json_data, jsonpath_filter):
if len(match) == 1:
s = match[0].value
if not s:
raise JSONNotFound("No Matching JSON could be found for the rule {}".format(jsonpath_filter.replace('json:', '')))
# Re #257 - Better handling where it does not exist, in the case the original 's' value was False..
if not match:
# Re 265 - Just return an empty string when filter not found
return ''
stripped_text_from_html = json.dumps(s, indent=4)
# Ticket #462 - allow the original encoding through, usually it's UTF-8 or similar
stripped_text_from_html = json.dumps(s, indent=4, ensure_ascii=False)
return stripped_text_from_html
@@ -85,6 +136,100 @@ def extract_json_as_string(content, jsonpath_filter):
break
if not stripped_text_from_html:
raise JSONNotFound("No JSON matching the rule '%s' found" % jsonpath_filter.replace('json:',''))
# Re 265 - Just return an empty string when filter not found
return ''
return stripped_text_from_html
# Mode - "content" return the content without the matches (default)
# - "line numbers" return a list of line numbers that match (int list)
#
# wordlist - list of regex's (str) or words (str)
def strip_ignore_text(content, wordlist, mode="content"):
ignore = []
ignore_regex = []
# @todo check this runs case insensitive
for k in wordlist:
# Is it a regex?
if k[0] == '/':
ignore_regex.append(k.strip(" /"))
else:
ignore.append(k)
i = 0
output = []
ignored_line_numbers = []
for line in content.splitlines():
i += 1
# Always ignore blank lines in this mode. (when this function gets called)
if len(line.strip()):
regex_matches = False
# if any of these match, skip
for regex in ignore_regex:
try:
if re.search(regex, line, re.IGNORECASE):
regex_matches = True
except Exception as e:
continue
if not regex_matches and not any(skip_text.lower() in line.lower() for skip_text in ignore):
output.append(line.encode('utf8'))
else:
ignored_line_numbers.append(i)
# Used for finding out what to highlight
if mode == "line numbers":
return ignored_line_numbers
return "\n".encode('utf8').join(output)
def html_to_text(html_content: str, render_anchor_tag_content=False) -> str:
"""Converts html string to a string with just the text. If ignoring
rendering anchor tag content is enable, anchor tag content are also
included in the text
:param html_content: string with html content
:param render_anchor_tag_content: boolean flag indicating whether to extract
hyperlinks (the anchor tag content) together with text. This refers to the
'href' inside 'a' tags.
Anchor tag content is rendered in the following manner:
'[ text ](anchor tag content)'
:return: extracted text from the HTML
"""
# if anchor tag content flag is set to True define a config for
# extracting this content
if render_anchor_tag_content:
parser_config = ParserConfig(
annotation_rules={"a": ["hyperlink"]}, display_links=True
)
# otherwise set config to None
else:
parser_config = None
# get text and annotations via inscriptis
text_content = get_text(html_content, config=parser_config)
return text_content
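# A minimal usage sketch (not from this changeset): anchor content is only rendered when the flag is set.
# Exact spacing depends on inscriptis, so the outputs below are approximate.
# snippet = '<p>See the <a href="https://example.com/deals">deals page</a></p>'
# html_to_text(snippet)                                  # roughly "See the deals page"
# html_to_text(snippet, render_anchor_tag_content=True)  # roughly "See the [deals page](https://example.com/deals)"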
def workarounds_for_obfuscations(content):
"""
Some sites are using sneaky tactics to make prices and other information un-renderable by Inscriptis
This could go into its own Pip package in the future, for faster updates
"""
# HomeDepot.com style <span>$<!-- -->90<!-- -->.<!-- -->74</span>
# https://github.com/weblyzard/inscriptis/issues/45
if not content:
return content
content = re.sub('<!--\s+-->', '', content)
return content
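# A minimal sketch (not from this changeset) of the workaround above, on the HomeDepot-style markup mentioned in the comment:
# workarounds_for_obfuscations('<span>$<!-- -->90<!-- -->.<!-- -->74</span>')   # -> '<span>$90.74</span>'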


@@ -0,0 +1,130 @@
from abc import ABC, abstractmethod
import time
import validators
class Importer():
remaining_data = []
new_uuids = []
good = 0
def __init__(self):
self.new_uuids = []
self.good = 0
self.remaining_data = []
@abstractmethod
def run(self,
data,
flash,
datastore):
pass
class import_url_list(Importer):
"""
Imports a list, can be in <code>https://example.com tag1, tag2, last tag</code> format
"""
def run(self,
data,
flash,
datastore,
):
urls = data.split("\n")
good = 0
now = time.time()
if (len(urls) > 5000):
flash("Importing 5,000 of the first URLs from your list, the rest can be imported again.")
for url in urls:
url = url.strip()
if not len(url):
continue
tags = ""
# 'tags' should be a csv list after the URL
if ' ' in url:
url, tags = url.split(" ", 1)
# Flask wtform validators won't work with basic auth, use validators package
# Up to 5000 per batch so we don't flood the server
if len(url) and validators.url(url.replace('source:', '')) and good < 5000:
new_uuid = datastore.add_watch(url=url.strip(), tag=tags, write_to_disk_now=False)
if new_uuid:
# Straight into the queue.
self.new_uuids.append(new_uuid)
good += 1
continue
# Worked past the 'continue' above, append it to the bad list
if self.remaining_data is None:
self.remaining_data = []
self.remaining_data.append(url)
flash("{} Imported from list in {:.2f}s, {} Skipped.".format(good, time.time() - now, len(self.remaining_data)))
class import_distill_io_json(Importer):
def run(self,
data,
flash,
datastore,
):
import json
good = 0
now = time.time()
self.new_uuids=[]
try:
data = json.loads(data.strip())
except json.decoder.JSONDecodeError:
flash("Unable to read JSON file, was it broken?", 'error')
return
if not data.get('data'):
flash("JSON structure looks invalid, was it broken?", 'error')
return
for d in data.get('data'):
d_config = json.loads(d['config'])
extras = {'title': d.get('name', None)}
if len(d['uri']) and good < 5000:
try:
# @todo we only support CSS ones at the moment
if d_config['selections'][0]['frames'][0]['excludes'][0]['type'] == 'css':
extras['subtractive_selectors'] = d_config['selections'][0]['frames'][0]['excludes'][0]['expr']
except KeyError:
pass
except IndexError:
pass
try:
extras['css_filter'] = d_config['selections'][0]['frames'][0]['includes'][0]['expr']
if d_config['selections'][0]['frames'][0]['includes'][0]['type'] == 'xpath':
extras['css_filter'] = 'xpath:' + extras['css_filter']
except KeyError:
pass
except IndexError:
pass
if d.get('tags', False):
extras['tag'] = ", ".join(d['tags'])
new_uuid = datastore.add_watch(url=d['uri'].strip(),
extras=extras,
write_to_disk_now=False)
if new_uuid:
# Straight into the queue.
self.new_uuids.append(new_uuid)
good += 1
flash("{} Imported from Distill.io in {:.2f}s, {} Skipped.".format(len(self.new_uuids), time.time() - now, len(self.remaining_data)))


@@ -0,0 +1,54 @@
from os import getenv
from changedetectionio.notification import (
default_notification_body,
default_notification_format,
default_notification_title,
)
_FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT = 6
class model(dict):
base_config = {
'note': "Hello! If you change this file manually, please be sure to restart your changedetection.io instance!",
'watching': {},
'settings': {
'headers': {
'User-Agent': getenv("DEFAULT_SETTINGS_HEADERS_USERAGENT", 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36'),
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Accept-Encoding': 'gzip, deflate', # No support for brotli in python requests yet.
'Accept-Language': 'en-GB,en-US;q=0.9,en;'
},
'requests': {
'timeout': int(getenv("DEFAULT_SETTINGS_REQUESTS_TIMEOUT", "45")), # Default 45 seconds
'time_between_check': {'weeks': None, 'days': None, 'hours': 3, 'minutes': None, 'seconds': None},
'jitter_seconds': 0,
'workers': int(getenv("DEFAULT_SETTINGS_REQUESTS_WORKERS", "10")), # Number of threads, lower is better for slow connections
'proxy': None # Preferred proxy connection
},
'application': {
'api_access_token_enabled': True,
'password': False,
'base_url' : None,
'extract_title_as_title': False,
'empty_pages_are_a_change': False,
'fetch_backend': 'html_fetcher_with_weird_memory_leak',
'filter_failure_notification_threshold_attempts': _FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT,
'global_ignore_text': [], # List of text to ignore when calculating the comparison checksum
'global_subtractive_selectors': [],
'ignore_whitespace': True,
'render_anchor_tag_content': False,
'notification_urls': [], # Apprise URL list
# Custom notification content
'notification_title': default_notification_title,
'notification_body': default_notification_body,
'notification_format': default_notification_format,
'real_browser_save_screenshot': True,
'schema_version' : 0,
'webdriver_delay': None # Extra delay in seconds before extracting text
}
}
}
def __init__(self, *arg, **kw):
super(model, self).__init__(*arg, **kw)
self.update(self.base_config)


@@ -0,0 +1,185 @@
import os
import uuid as uuid_builder
from distutils.util import strtobool
minimum_seconds_recheck_time = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60))
mtable = {'seconds': 1, 'minutes': 60, 'hours': 3600, 'days': 86400, 'weeks': 86400 * 7}
from changedetectionio.notification import (
default_notification_body,
default_notification_format,
default_notification_title,
)
class model(dict):
__newest_history_key = None
__history_n=0
__base_config = {
'url': None,
'tag': None,
'last_checked': 0,
'last_changed': 0,
'paused': False,
'last_viewed': 0, # history key value of the last viewed via the [diff] link
#'newest_history_key': 0,
'title': None,
'previous_md5': False,
'uuid': str(uuid_builder.uuid4()),
'headers': {}, # Extra headers to send
'body': None,
'method': 'GET',
#'history': {}, # Dict of timestamp and output stripped filename
'ignore_text': [], # List of text to ignore when calculating the comparison checksum
# Custom notification content
'notification_urls': [], # List of URLs to add to the notification Queue (Usually AppRise)
'notification_title': default_notification_title,
'notification_body': default_notification_body,
'notification_format': default_notification_format,
'css_filter': '',
'extract_text': [], # Extract text by regex after filters
'subtractive_selectors': [],
'trigger_text': [], # List of text or regex to wait for until a change is detected
'text_should_not_be_present': [], # Text that should not be present
'fetch_backend': None,
'filter_failure_notification_send': strtobool(os.getenv('FILTER_FAILURE_NOTIFICATION_SEND_DEFAULT', 'True')),
'consecutive_filter_failures': 0, # Every time the CSS/xPath filter cannot be located, reset when all is fine.
'extract_title_as_title': False,
'check_unique_lines': False, # On change-detected, compare against all history if its something new
'proxy': None, # Preferred proxy connection
# Re #110, so then if this is set to None, we know to use the default value instead
# Requires setting to None on submit if it's the same as the default
# Should be all None by default, so we use the system default in this case.
'time_between_check': {'weeks': None, 'days': None, 'hours': None, 'minutes': None, 'seconds': None},
'webdriver_delay': None,
'webdriver_js_execute_code': None, # Run before change-detection
}
jitter_seconds = 0
def __init__(self, *arg, **kw):
import uuid
self.update(self.__base_config)
self.__datastore_path = kw['datastore_path']
self['uuid'] = str(uuid.uuid4())
del kw['datastore_path']
if kw.get('default'):
self.update(kw['default'])
del kw['default']
# goes at the end so we update the default object with the initialiser
super(model, self).__init__(*arg, **kw)
@property
def viewed(self):
if int(self['last_viewed']) >= int(self.newest_history_key) :
return True
return False
@property
def history_n(self):
return self.__history_n
@property
def history(self):
tmp_history = {}
import logging
import time
# Read the history file as a dict
fname = os.path.join(self.__datastore_path, self.get('uuid'), "history.txt")
if os.path.isfile(fname):
logging.debug("Reading history index " + str(time.time()))
with open(fname, "r") as f:
tmp_history = dict(i.strip().split(',', 2) for i in f.readlines())
if len(tmp_history):
self.__newest_history_key = list(tmp_history.keys())[-1]
self.__history_n = len(tmp_history)
return tmp_history
@property
def has_history(self):
fname = os.path.join(self.__datastore_path, self.get('uuid'), "history.txt")
return os.path.isfile(fname)
# Returns the newest key, but if there's only 1 record, then it's counted as not being new, so return 0.
@property
def newest_history_key(self):
if self.__newest_history_key is not None:
return self.__newest_history_key
if len(self.history) <= 1:
return 0
bump = self.history
return self.__newest_history_key
# Save some text file to the appropriate path and bump the history
# result_obj from fetch_site_status.run()
def save_history_text(self, contents, timestamp):
import uuid
from os import mkdir, path, unlink
import logging
output_path = "{}/{}".format(self.__datastore_path, self['uuid'])
# In case the operator deleted it, check and create.
if not os.path.isdir(output_path):
mkdir(output_path)
snapshot_fname = "{}/{}.stripped.txt".format(output_path, uuid.uuid4())
logging.debug("Saving history text {}".format(snapshot_fname))
with open(snapshot_fname, 'wb') as f:
f.write(contents)
f.close()
# Append to index
# @todo check last char was \n
index_fname = "{}/history.txt".format(output_path)
with open(index_fname, 'a') as f:
f.write("{},{}\n".format(timestamp, snapshot_fname))
f.close()
self.__newest_history_key = timestamp
self.__history_n+=1
#@todo bump static cache of the last timestamp so we don't need to examine the file to set a proper 'viewed' status
return snapshot_fname
@property
def has_empty_checktime(self):
# using all() + dictionary comprehension
# Check if all values are 0 in dictionary
res = all(x == None or x == False or x==0 for x in self.get('time_between_check', {}).values())
return res
def threshold_seconds(self):
seconds = 0
for m, n in mtable.items():
x = self.get('time_between_check', {}).get(m, None)
if x:
seconds += x * n
return seconds
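# A minimal sketch (not from this changeset) of the conversion above, using the mtable defined at the top of this file:
# example = {'weeks': None, 'days': 1, 'hours': 3, 'minutes': None, 'seconds': 30}
# seconds = sum(n * mtable[m] for m, n in example.items() if n)
# seconds == 97230  (1 day + 3 hours + 30 seconds)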
# Iterate over all history texts and see if something new exists
def lines_contain_something_unique_compared_to_history(self, lines=[]):
local_lines = set([l.decode('utf-8').strip().lower() for l in lines])
# Compare each lines (set) against each history text file (set) looking for something new..
existing_history = set({})
for k, v in self.history.items():
alist = set([line.decode('utf-8').strip().lower() for line in open(v, 'rb')])
existing_history = existing_history.union(alist)
# Check that everything in local_lines(new stuff) already exists in existing_history - it should
# if not, something new happened
return not local_lines.issubset(existing_history)
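# A minimal sketch (not from this changeset) of the set comparison above, with hypothetical snapshot lines:
# existing_history = {"in stock", "price: $10"}
# local_lines = {"in stock", "price: $12"}
# not local_lines.issubset(existing_history)   # True - "price: $12" never appeared in any earlier snapshot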


@@ -1,5 +1,5 @@
import os
import apprise
from apprise import NotifyFormat
valid_tokens = {
'base_url': '',
@@ -7,26 +7,32 @@ valid_tokens = {
'watch_uuid': '',
'watch_title': '',
'watch_tag': '',
'diff': '',
'diff_full': '',
'diff_url': '',
'preview_url': '',
'current_snapshot': ''
}
valid_notification_formats = {
'Text': NotifyFormat.TEXT,
'Markdown': NotifyFormat.MARKDOWN,
'HTML': NotifyFormat.HTML,
}
default_notification_format = 'Text'
default_notification_body = '{watch_url} had a change.\n---\n{diff}\n---\n'
default_notification_title = 'ChangeDetection.io Notification - {watch_url}'
def process_notification(n_object, datastore):
import logging
log = logging.getLogger('apprise')
log.setLevel('TRACE')
apobj = apprise.Apprise(debug=True)
for url in n_object['notification_urls']:
url = url.strip()
print (">> Process Notification: AppRise notifying {}".format(url))
apobj.add(url)
# Get the notification body from datastore
n_body = n_object['notification_body']
n_title = n_object['notification_title']
n_body = n_object.get('notification_body', default_notification_body)
n_title = n_object.get('notification_title', default_notification_title)
n_format = valid_notification_formats.get(
n_object['notification_format'],
valid_notification_formats[default_notification_format],
)
# Insert variables into the notification content
notification_parameters = create_notification_parameters(n_object, datastore)
@@ -37,10 +43,81 @@ def process_notification(n_object, datastore):
n_title = n_title.replace(token, val)
n_body = n_body.replace(token, val)
apobj.notify(
body=n_body,
title=n_title
)
# https://github.com/caronc/apprise/wiki/Development_LogCapture
# Anything higher than or equal to WARNING (which covers things like Connection errors)
# raise it as an exception
apobjs=[]
sent_objs=[]
from .apprise_asset import asset
for url in n_object['notification_urls']:
apobj = apprise.Apprise(debug=True, asset=asset)
url = url.strip()
if len(url):
print(">> Process Notification: AppRise notifying {}".format(url))
with apprise.LogCapture(level=apprise.logging.DEBUG) as logs:
# Re 323 - Limit discord length to their 2000 char limit total or it wont send.
# Because different notifications may require different pre-processing, run each sequentially :(
# 2000 bytes minus -
# 200 bytes for the overhead of the _entire_ json payload, 200 bytes for {tts, wait, content} etc headers
# Length of URL - In case they specify a longer custom avatar_url
# So if no avatar_url is specified, add one so it can be correctly calculated into the total payload
k = '?' if not '?' in url else '&'
if not 'avatar_url' in url and not url.startswith('mail'):
url += k + 'avatar_url=https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/changedetectionio/static/images/avatar-256x256.png'
if url.startswith('tgram://'):
# Telegram only supports a limited subset of HTML, remove the '<br/>' we place in.
# re https://github.com/dgtlmoon/changedetection.io/issues/555
# @todo re-use an existing library we have already imported to strip all non-allowed tags
n_body = n_body.replace('<br/>', '\n')
n_body = n_body.replace('</br>', '\n')
# real limit is 4096, but minus some for extra metadata
payload_max_size = 3600
body_limit = max(0, payload_max_size - len(n_title))
n_title = n_title[0:payload_max_size]
n_body = n_body[0:body_limit]
elif url.startswith('discord://') or url.startswith('https://discordapp.com/api/webhooks') or url.startswith('https://discord.com/api'):
# real limit is 2000, but minus some for extra metadata
payload_max_size = 1700
body_limit = max(0, payload_max_size - len(n_title))
n_title = n_title[0:payload_max_size]
n_body = n_body[0:body_limit]
elif url.startswith('mailto'):
# Apprise will default to HTML, so we need to override it
# So that what's generated in n_body is in line with what is going to be sent.
# https://github.com/caronc/apprise/issues/633#issuecomment-1191449321
if not 'format=' in url and (n_format == 'text' or n_format == 'markdown'):
prefix = '?' if not '?' in url else '&'
url = "{}{}format={}".format(url, prefix, n_format)
apobj.add(url)
apobj.notify(
title=n_title,
body=n_body,
body_format=n_format)
apobj.clear()
# In case it needs to exist in memory for a while afterwards to process(?)
apobjs.append(apobj)
# Returns empty string if nothing found, multi-line string otherwise
log_value = logs.getvalue()
if log_value and 'WARNING' in log_value or 'ERROR' in log_value:
raise Exception(log_value)
sent_objs.append({'title': n_title,
'body': n_body,
'url' : url,
'body_format': n_format})
# Return what was sent for better logging - after the for loop
return sent_objs
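# A minimal sketch (not from this changeset) of the Discord budget arithmetic above, with a hypothetical title:
# payload_max_size = 1700                          # ~2000 char Discord limit minus JSON/header overhead
# n_title = 'Price changed on the watched page'    # 33 characters
# body_limit = max(0, payload_max_size - len(n_title))   # 1667 characters remain for the body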
# Notification title + body content parameters get created here.
def create_notification_parameters(n_object, datastore):
@@ -57,7 +134,8 @@ def create_notification_parameters(n_object, datastore):
watch_tag = ''
# Create URLs to customise the notification with
base_url = os.getenv('BASE_URL', '').strip('"')
base_url = datastore.data['settings']['application']['base_url']
watch_url = n_object['watch_url']
# Re #148 - Some people have just {base_url} in the body or title, but this may break some notification services
@@ -73,15 +151,17 @@ def create_notification_parameters(n_object, datastore):
# Valid_tokens also used as a field validator
tokens.update(
{
'base_url': base_url if base_url is not None else '',
'watch_url': watch_url,
'watch_uuid': uuid,
'watch_title': watch_title if watch_title is not None else '',
'watch_tag': watch_tag if watch_tag is not None else '',
'diff_url': diff_url,
'preview_url': preview_url,
'current_snapshot': n_object['current_snapshot'] if 'current_snapshot' in n_object else ''
})
{
'base_url': base_url if base_url is not None else '',
'watch_url': watch_url,
'watch_uuid': uuid,
'watch_title': watch_title if watch_title is not None else '',
'watch_tag': watch_tag if watch_tag is not None else '',
'diff_url': diff_url,
'diff': n_object.get('diff', ''), # Null default in the case we use a test
'diff_full': n_object.get('diff_full', ''), # Null default in the case we use a test
'preview_url': preview_url,
'current_snapshot': n_object['current_snapshot'] if 'current_snapshot' in n_object else ''
})
return tokens
return tokens


@@ -9,11 +9,39 @@
# exit when any command fails
set -e
# Re #65 - Ability to include a link back to the installation, in the notification.
export BASE_URL="https://foobar.com"
find tests/test_*py -type f|while read test_name
do
echo "TEST RUNNING $test_name"
pytest $test_name
done
echo "RUNNING WITH BASE_URL SET"
# Now re-run some tests with BASE_URL enabled
# Re #65 - Ability to include a link back to the installation, in the notification.
export BASE_URL="https://really-unique-domain.io"
pytest tests/test_notification.py
# Now for the selenium and playwright/browserless fetchers
# Note - this is not UI functional tests - just checking that each one can fetch the content
echo "TESTING WEBDRIVER FETCH > SELENIUM/WEBDRIVER..."
docker run -d --name $$-test_selenium -p 4444:4444 --rm --shm-size="2g" selenium/standalone-chrome-debug:3.141.59
# takes a while to spin up
sleep 5
export WEBDRIVER_URL=http://localhost:4444/wd/hub
pytest tests/fetchers/test_content.py
unset WEBDRIVER_URL
docker kill $$-test_selenium
echo "TESTING WEBDRIVER FETCH > PLAYWRIGHT/BROWSERLESS..."
# Not all platforms support playwright (not ARM/rPI), so it's not packaged in requirements.txt
pip3 install playwright~=1.22
docker run -d --name $$-test_browserless -e "DEFAULT_LAUNCH_ARGS=[\"--window-size=1920,1080\"]" --rm -p 3000:3000 --shm-size="2g" browserless/chrome:1.53-chrome-stable
# takes a while to spin up
sleep 5
export PLAYWRIGHT_DRIVER_URL=ws://127.0.0.1:3000
pytest tests/fetchers/test_content.py
unset PLAYWRIGHT_DRIVER_URL
docker kill $$-test_browserless

Binary file not shown.


Binary file not shown.


Binary file not shown.



@@ -0,0 +1,40 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
version="1.1"
id="Layer_1"
x="0px"
y="0px"
viewBox="0 0 115.77 122.88"
style="enable-background:new 0 0 115.77 122.88"
xml:space="preserve"
sodipodi:docname="copy.svg"
inkscape:version="1.1.1 (1:1.1+202109281949+c3084ef5ed)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><defs
id="defs11" /><sodipodi:namedview
id="namedview9"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
showgrid="false"
inkscape:zoom="5.5501303"
inkscape:cx="57.83648"
inkscape:cy="61.439999"
inkscape:window-width="1920"
inkscape:window-height="1056"
inkscape:window-x="1920"
inkscape:window-y="0"
inkscape:window-maximized="1"
inkscape:current-layer="g6" /><style
type="text/css"
id="style2">.st0{fill-rule:evenodd;clip-rule:evenodd;}</style><g
id="g6"><path
class="st0"
d="M89.62,13.96v7.73h12.19h0.01v0.02c3.85,0.01,7.34,1.57,9.86,4.1c2.5,2.51,4.06,5.98,4.07,9.82h0.02v0.02 v73.27v0.01h-0.02c-0.01,3.84-1.57,7.33-4.1,9.86c-2.51,2.5-5.98,4.06-9.82,4.07v0.02h-0.02h-61.7H40.1v-0.02 c-3.84-0.01-7.34-1.57-9.86-4.1c-2.5-2.51-4.06-5.98-4.07-9.82h-0.02v-0.02V92.51H13.96h-0.01v-0.02c-3.84-0.01-7.34-1.57-9.86-4.1 c-2.5-2.51-4.06-5.98-4.07-9.82H0v-0.02V13.96v-0.01h0.02c0.01-3.85,1.58-7.34,4.1-9.86c2.51-2.5,5.98-4.06,9.82-4.07V0h0.02h61.7 h0.01v0.02c3.85,0.01,7.34,1.57,9.86,4.1c2.5,2.51,4.06,5.98,4.07,9.82h0.02V13.96L89.62,13.96z M79.04,21.69v-7.73v-0.02h0.02 c0-0.91-0.39-1.75-1.01-2.37c-0.61-0.61-1.46-1-2.37-1v0.02h-0.01h-61.7h-0.02v-0.02c-0.91,0-1.75,0.39-2.37,1.01 c-0.61,0.61-1,1.46-1,2.37h0.02v0.01v64.59v0.02h-0.02c0,0.91,0.39,1.75,1.01,2.37c0.61,0.61,1.46,1,2.37,1v-0.02h0.01h12.19V35.65 v-0.01h0.02c0.01-3.85,1.58-7.34,4.1-9.86c2.51-2.5,5.98-4.06,9.82-4.07v-0.02h0.02H79.04L79.04,21.69z M105.18,108.92V35.65v-0.02 h0.02c0-0.91-0.39-1.75-1.01-2.37c-0.61-0.61-1.46-1-2.37-1v0.02h-0.01h-61.7h-0.02v-0.02c-0.91,0-1.75,0.39-2.37,1.01 c-0.61,0.61-1,1.46-1,2.37h0.02v0.01v73.27v0.02h-0.02c0,0.91,0.39,1.75,1.01,2.37c0.61,0.61,1.46,1,2.37,1v-0.02h0.01h61.7h0.02 v0.02c0.91,0,1.75-0.39,2.37-1.01c0.61-0.61,1-1.46,1-2.37h-0.02V108.92L105.18,108.92z"
id="path4"
style="fill:#ffffff;fill-opacity:1" /></g></svg>



@@ -0,0 +1,20 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
width="18"
height="19.92"
viewBox="0 0 18 19.92"
version="1.1"
id="svg6"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<defs
id="defs10" />
<path
d="M -3,-2 H 21 V 22 H -3 Z"
fill="none"
id="path2" />
<path
d="m 15,14.08 c -0.76,0 -1.44,0.3 -1.96,0.77 L 5.91,10.7 C 5.96,10.47 6,10.24 6,10 6,9.76 5.96,9.53 5.91,9.3 L 12.96,5.19 C 13.5,5.69 14.21,6 15,6 16.66,6 18,4.66 18,3 18,1.34 16.66,0 15,0 c -1.66,0 -3,1.34 -3,3 0,0.24 0.04,0.47 0.09,0.7 L 5.04,7.81 C 4.5,7.31 3.79,7 3,7 1.34,7 0,8.34 0,10 c 0,1.66 1.34,3 3,3 0.79,0 1.5,-0.31 2.04,-0.81 l 7.12,4.16 c -0.05,0.21 -0.08,0.43 -0.08,0.65 0,1.61 1.31,2.92 2.92,2.92 1.61,0 2.92,-1.31 2.92,-2.92 0,-1.61 -1.31,-2.92 -2.92,-2.92 z"
id="path4"
style="fill:#ffffff;fill-opacity:1" />
</svg>



@@ -0,0 +1,46 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
width="18"
height="19.92"
viewBox="0 0 18 19.92"
version="1.1"
id="svg6"
sodipodi:docname="spread.svg"
inkscape:version="1.1.1 (1:1.1+202109281949+c3084ef5ed)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<defs
id="defs10" />
<sodipodi:namedview
id="namedview8"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
showgrid="false"
fit-margin-top="0"
fit-margin-left="0"
fit-margin-right="0"
fit-margin-bottom="0"
inkscape:zoom="28.416667"
inkscape:cx="9.0087975"
inkscape:cy="9.9941348"
inkscape:window-width="1920"
inkscape:window-height="1056"
inkscape:window-x="1920"
inkscape:window-y="0"
inkscape:window-maximized="1"
inkscape:current-layer="svg6" />
<path
d="M -3,-2 H 21 V 22 H -3 Z"
fill="none"
id="path2" />
<path
d="m 15,14.08 c -0.76,0 -1.44,0.3 -1.96,0.77 L 5.91,10.7 C 5.96,10.47 6,10.24 6,10 6,9.76 5.96,9.53 5.91,9.3 L 12.96,5.19 C 13.5,5.69 14.21,6 15,6 16.66,6 18,4.66 18,3 18,1.34 16.66,0 15,0 c -1.66,0 -3,1.34 -3,3 0,0.24 0.04,0.47 0.09,0.7 L 5.04,7.81 C 4.5,7.31 3.79,7 3,7 1.34,7 0,8.34 0,10 c 0,1.66 1.34,3 3,3 0.79,0 1.5,-0.31 2.04,-0.81 l 7.12,4.16 c -0.05,0.21 -0.08,0.43 -0.08,0.65 0,1.61 1.31,2.92 2.92,2.92 1.61,0 2.92,-1.31 2.92,-2.92 0,-1.61 -1.31,-2.92 -2.92,-2.92 z"
id="path4"
style="fill:#0078e7;fill-opacity:1" />
</svg>



@@ -0,0 +1,17 @@
$(document).ready(function () {
// Load it when the #screenshot tab is in use, so we don't give a slow experience when waiting for the text diff to load
window.addEventListener('hashchange', function (e) {
toggle(location.hash);
}, false);
toggle(location.hash);
function toggle(hash_name) {
if (hash_name === '#screenshot') {
$("img#screenshot-img").attr('src', screenshot_url);
$("#settings").hide();
} else {
$("#settings").show();
}
}
});


@@ -0,0 +1,36 @@
$(document).ready(function () {
function toggle() {
if ($('input[name="application-fetch_backend"]:checked').val() != 'html_requests') {
$('#requests-override-options').hide();
$('#webdriver-override-options').show();
} else {
$('#requests-override-options').show();
$('#webdriver-override-options').hide();
}
}
$('input[name="application-fetch_backend"]').click(function (e) {
toggle();
});
toggle();
$("#api-key").hover(
function () {
$("#api-key-copy").html('copy').fadeIn();
},
function () {
$("#api-key-copy").hide();
}
).click(function (e) {
$("#api-key-copy").html('copied');
var range = document.createRange();
var n = $("#api-key")[0];
range.selectNode(n);
window.getSelection().removeAllRanges();
window.getSelection().addRange(range);
document.execCommand("copy");
window.getSelection().removeAllRanges();
});
});
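
A note on the copy handler above: it selects the #api-key node and uses the deprecated document.execCommand("copy"). Where the asynchronous Clipboard API is available (secure contexts only), the same behaviour can be achieved without touching the selection - a minimal sketch, not part of this changeset:

$("#api-key").click(function () {
    // navigator.clipboard only exists in secure contexts (https or localhost)
    if (navigator.clipboard) {
        navigator.clipboard.writeText($(this).text()).then(function () {
            $("#api-key-copy").html('copied');
        });
    }
});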

File diff suppressed because one or more lines are too long

View File

@@ -0,0 +1,56 @@
/**
* debounce
* @param {integer} milliseconds This param indicates the number of milliseconds
* to wait after the last call before calling the original function.
* @param {object} What "this" refers to in the returned function.
* @return {function} This returns a function that when called will wait the
* indicated number of milliseconds after the last call before
* calling the original function.
*/
Function.prototype.debounce = function (milliseconds, context) {
var baseFunction = this,
timer = null,
wait = milliseconds;
return function () {
var self = context || this,
args = arguments;
function complete() {
baseFunction.apply(self, args);
timer = null;
}
if (timer) {
clearTimeout(timer);
}
timer = setTimeout(complete, wait);
};
};
/**
* throttle
* @param {integer} milliseconds This param indicates the number of milliseconds
* to wait between calls before calling the original function.
* @param {object} What "this" refers to in the returned function.
* @return {function} This returns a function that when called will wait the
* indicated number of milliseconds between calls before
* calling the original function.
*/
Function.prototype.throttle = function (milliseconds, context) {
var baseFunction = this,
lastEventTimestamp = null,
limit = milliseconds;
return function () {
var self = context || this,
args = arguments,
now = Date.now();
if (!lastEventTimestamp || now - lastEventTimestamp >= limit) {
lastEventTimestamp = now;
baseFunction.apply(self, args);
}
};
};
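
Both helpers above are attached to Function.prototype, so any function can be rate-limited by calling .debounce() or .throttle() on it (the visual selector script further down uses .debounce(5) on its mousemove handler). A small usage sketch - the selectors and delays here are illustrative only:

// Run the handler only after typing has paused for 250ms
$('#search-box').on('keyup', function () {
    console.log('search for', $(this).val());
}.debounce(250));

// Run the handler at most once every 200ms while scrolling
$(window).on('scroll', function () {
    console.log('scrolled to', window.scrollY);
}.throttle(200));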

View File

@@ -0,0 +1,59 @@
$(document).ready(function() {
$('#add-email-helper').click(function (e) {
e.preventDefault();
email = prompt("Destination email");
if(email) {
var n = $(".notification-urls");
var p=email_notification_prefix;
$(n).val( $.trim( $(n).val() )+"\n"+email_notification_prefix+email );
}
});
$('#send-test-notification').click(function (e) {
e.preventDefault();
// this can be global
var csrftoken = $('input[name=csrf_token]').val();
$.ajaxSetup({
beforeSend: function(xhr, settings) {
if (!/^(GET|HEAD|OPTIONS|TRACE)$/i.test(settings.type) && !this.crossDomain) {
xhr.setRequestHeader("X-CSRFToken", csrftoken)
}
}
})
data = {
window_url : window.location.href,
notification_urls : $('.notification-urls').val(),
notification_title : $('.notification-title').val(),
notification_body : $('.notification-body').val(),
notification_format : $('.notification-format').val(),
}
for (key in data) {
if (!data[key].length) {
alert(key+" is empty, cannot send test.")
return;
}
}
$.ajax({
type: "POST",
url: notification_base_url,
data : data,
statusCode: {
400: function() {
// More than likely the CSRF token was lost when the server restarted
alert("There was a problem processing the request, please reload the page.");
}
}
}).done(function(data){
console.log(data);
alert('Sent');
}).fail(function(data){
console.log(data);
alert('There was an error communicating with the server.');
})
});
});
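
This script relies on a few globals being defined by the page that includes it: notification_base_url, email_notification_prefix (only present when an email prefix is configured) and a hidden csrf_token input - the edit.html template later in this changeset sets them in an inline script block. The values below are placeholders for illustration only:

// Normally emitted by the Jinja template (see edit.html further down); these values are placeholders
const notification_base_url = "/notification/send-test";
const email_notification_prefix = "mailtos://user:password@mail.example.com?to=";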

View File

@@ -1,13 +0,0 @@
window.addEventListener("load", (event) => {
// just an example for now
function toggleVisible(elem) {
// there are better ways to do this
var x = document.getElementById(elem);
if (x.style.display === "block") {
x.style.display = "none";
} else {
x.style.display = "block";
}
}
});

View File

@@ -1,6 +1,11 @@
// Rewrite this as a plugin.. is all this JS really 'worth it'?
if(!window.location.hash) {
var tab=document.querySelectorAll("#default-tab a");
tab[0].click();
}
window.addEventListener('hashchange', function() {
var tabs = document.getElementsByClassName('active');
while (tabs[0]) {
@@ -21,7 +26,6 @@ if (!has_errors.length) {
focus_error_tab();
}
function set_active_tab() {
var tab=document.querySelectorAll("a[href='"+location.hash+"']");
if (tab.length) {

View File

@@ -0,0 +1,230 @@
// Horrible proof of concept code :)
// yes - this is really a hack, if you are a front-ender and want to help, please get in touch!
$(document).ready(function() {
var current_selected_i;
var state_clicked=false;
var c;
// greyed out fill context
var xctx;
// redline highlight context
var ctx;
var current_default_xpath;
var x_scale=1;
var y_scale=1;
var selector_image;
var selector_image_rect;
var selector_data;
$('#visualselector-tab').click(function () {
$("img#selector-background").off('load');
state_clicked = false;
current_selected_i = false;
bootstrap_visualselector();
});
$(document).on('keydown', function(event) {
if ($("img#selector-background").is(":visible")) {
if (event.key == "Escape") {
state_clicked=false;
ctx.clearRect(0, 0, c.width, c.height);
}
}
});
// For when the page loads
if(!window.location.hash || window.location.hash != '#visualselector') {
$("img#selector-background").attr('src','');
return;
}
// Handle clearing button/link
$('#clear-selector').on('click', function(event) {
if(!state_clicked) {
alert('Oops, Nothing selected!');
}
state_clicked=false;
ctx.clearRect(0, 0, c.width, c.height);
xctx.clearRect(0, 0, c.width, c.height);
$("#css_filter").val('');
});
bootstrap_visualselector();
function bootstrap_visualselector() {
if ( 1 ) {
// bootstrap it, this will trigger everything else
$("img#selector-background").bind('load', function () {
console.log("Loaded background...");
c = document.getElementById("selector-canvas");
// greyed out fill context
xctx = c.getContext("2d");
// redline highlight context
ctx = c.getContext("2d");
current_default_xpath =$("#css_filter").val();
fetch_data();
$('#selector-canvas').off("mousemove mousedown");
// screenshot_url defined in the edit.html template
}).attr("src", screenshot_url);
}
}
function fetch_data() {
// Image is ready
$('.fetching-update-notice').html("Fetching element data..");
$.ajax({
url: watch_visual_selector_data_url,
context: document.body
}).done(function (data) {
$('.fetching-update-notice').html("Rendering..");
selector_data = data;
console.log("Reported browser width from backend: "+data['browser_width']);
state_clicked=false;
set_scale();
reflow_selector();
$('.fetching-update-notice').fadeOut();
});
};
function set_scale() {
// some things to check if the scaling doesn't work
// - that the widths/sizes really are about the actual screen size: cat elements.json |grep -o width......|sort|uniq
selector_image = $("img#selector-background")[0];
selector_image_rect = selector_image.getBoundingClientRect();
// make the canvas the same size as the image
$('#selector-canvas').attr('height', selector_image_rect.height);
$('#selector-canvas').attr('width', selector_image_rect.width);
$('#selector-wrapper').attr('width', selector_image_rect.width);
x_scale = selector_image_rect.width / selector_data['browser_width'];
y_scale = selector_image_rect.height / selector_image.naturalHeight;
ctx.strokeStyle = 'rgba(255,0,0, 0.9)';
ctx.fillStyle = 'rgba(255,0,0, 0.1)';
ctx.lineWidth = 3;
console.log("scaling set x: "+x_scale+" by y:"+y_scale);
$("#selector-current-xpath").css('max-width', selector_image_rect.width);
}
function reflow_selector() {
$(window).resize(function() {
set_scale();
highlight_current_selected_i();
});
var selector_currnt_xpath_text=$("#selector-current-xpath span");
set_scale();
console.log(selector_data['size_pos'].length + " selectors found");
// highlight the default one if we can find it in the xPath list
// or the xpath matches the default one
found = false;
if(current_default_xpath.length) {
for (var i = selector_data['size_pos'].length; i!==0; i--) {
var sel = selector_data['size_pos'][i-1];
if(selector_data['size_pos'][i - 1].xpath == current_default_xpath) {
console.log("highlighting "+current_default_xpath);
current_selected_i = i-1;
highlight_current_selected_i();
found = true;
break;
}
}
if(!found) {
alert("Unfortunately your existing CSS/xPath Filter was no longer found!");
}
}
$('#selector-canvas').bind('mousemove', function (e) {
if(state_clicked) {
return;
}
ctx.clearRect(0, 0, c.width, c.height);
current_selected_i=null;
// Add in offset
if ((typeof e.offsetX === "undefined" || typeof e.offsetY === "undefined") || (e.offsetX === 0 && e.offsetY === 0)) {
var targetOffset = $(e.target).offset();
e.offsetX = e.pageX - targetOffset.left;
e.offsetY = e.pageY - targetOffset.top;
}
// Reverse order - the most specific one should be deepest/last
// Basically, find the deepest match
var found=0;
ctx.fillStyle = 'rgba(205,0,0,0.35)';
for (var i = selector_data['size_pos'].length; i!==0; i--) {
// draw all of them? let them choose somehow?
var sel = selector_data['size_pos'][i-1];
// If we are in a bounding-box
if (e.offsetY > sel.top * y_scale && e.offsetY < sel.top * y_scale + sel.height * y_scale
&&
e.offsetX > sel.left * y_scale && e.offsetX < sel.left * y_scale + sel.width * y_scale
) {
// FOUND ONE
set_current_selected_text(sel.xpath);
ctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
ctx.fillRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
// no need to keep digging
// @todo or, O to go out/up, I to go in
// or double click to go up/out the selector?
current_selected_i=i-1;
found+=1;
break;
}
}
}.debounce(5));
function set_current_selected_text(s) {
selector_currnt_xpath_text[0].innerHTML=s;
}
function highlight_current_selected_i() {
if(state_clicked) {
state_clicked=false;
xctx.clearRect(0,0,c.width, c.height);
return;
}
var sel = selector_data['size_pos'][current_selected_i];
if (sel[0] == '/') {
// @todo - not sure just checking / is right
$("#css_filter").val('xpath:'+sel.xpath);
} else {
$("#css_filter").val(sel.xpath);
}
xctx.fillStyle = 'rgba(205,205,205,0.95)';
xctx.strokeStyle = 'rgba(225,0,0,0.9)';
xctx.lineWidth = 3;
xctx.fillRect(0,0,c.width, c.height);
// Clear out what only should be seen (make a clear/clean spot)
xctx.clearRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
xctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
state_clicked=true;
set_current_selected_text(sel.xpath);
}
$('#selector-canvas').bind('mousedown', function (e) {
highlight_current_selected_i();
});
}
});
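
For reference, the element rectangles returned by the backend are in original page pixels, while the screenshot is displayed at whatever width fits the browser, so each rectangle is multiplied by x_scale / y_scale before being drawn. A worked example of that mapping with made-up numbers:

// Backend reported browser_width = 1280, but the screenshot <img> is rendered 640px wide
var x_scale = 640 / 1280;   // 0.5
var y_scale = 0.5;          // assume the image height is also shown at half its natural size
var sel = { left: 100, top: 40, width: 300, height: 20 };   // element rectangle in original page pixels
var ctx = document.createElement('canvas').getContext('2d');
// Drawn at canvas x=50, y=20, size 150x10 - the same spot the element occupies in the scaled screenshot
ctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);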

View File

@@ -0,0 +1,26 @@
$(function () {
// Remove unviewed status when normally clicked
$('.diff-link').click(function () {
$(this).closest('.unviewed').removeClass('unviewed');
});
$('.with-share-link > *').click(function () {
$("#copied-clipboard").remove();
var range = document.createRange();
var n=$("#share-link")[0];
range.selectNode(n);
window.getSelection().removeAllRanges();
window.getSelection().addRange(range);
document.execCommand("copy");
window.getSelection().removeAllRanges();
$('.with-share-link').append('<span style="font-size: 80%; color: #fff;" id="copied-clipboard">Copied to clipboard</span>');
$("#copied-clipboard").fadeOut(2500, function() {
$(this).remove();
});
});
});

View File

@@ -0,0 +1,33 @@
$(document).ready(function() {
function toggle() {
if ($('input[name="fetch_backend"]:checked').val() == 'html_webdriver') {
if(playwright_enabled) {
// playwright supports headers, so hide everything else
// See #664
$('#requests-override-options #request-method').hide();
$('#requests-override-options #request-body').hide();
// @todo connect this one up
$('#ignore-status-codes-option').hide();
} else {
// selenium/webdriver doesn't support anything afaik, hide it all
$('#requests-override-options').hide();
}
$('#webdriver-override-options').show();
} else {
$('#requests-override-options').show();
$('#requests-override-options *:hidden').show();
$('#webdriver-override-options').hide();
}
}
$('input[name="fetch_backend"]').click(function (e) {
toggle();
});
toggle();
});

View File

@@ -1 +1,3 @@
node_modules
package-lock.json

View File

@@ -1,9 +1,10 @@
#diff-ui {
background: #fff;
padding: 2em;
margin: 1em;
margin-left: 1em;
margin-right: 1em;
border-radius: 5px;
font-size: 9px; }
font-size: 11px; }
#diff-ui table {
table-layout: fixed;
width: 100%; }
@@ -54,3 +55,24 @@ ins {
body {
height: 99%;
/* Hide scroll bar in Firefox */ } }
td#diff-col div {
text-align: justify;
white-space: pre-wrap; }
.ignored {
background-color: #ccc;
/* border: #0d91fa 1px solid; */
opacity: 0.7; }
.triggered {
background-color: #1b98f8; }
/* ignored and triggered? make it obvious error */
.ignored.triggered {
background-color: #ff0000; }
.tab-pane-inner#screenshot {
text-align: center; }
.tab-pane-inner#screenshot img {
max-width: 99%; }

View File

@@ -2,9 +2,10 @@
background: #fff;
padding: 2em;
margin: 1em;
margin-left: 1em;
margin-right: 1em;
border-radius: 5px;
font-size: 9px;
font-size: 11px;
table {
table-layout: fixed;
@@ -66,3 +67,30 @@ ins {
height: 99%; /* Hide scroll bar in Firefox */
}
}
td#diff-col div {
text-align: justify;
white-space: pre-wrap;
}
.ignored {
background-color: #ccc;
/* border: #0d91fa 1px solid; */
opacity: 0.7;
}
.triggered {
background-color: #1b98f8;
}
/* ignored and triggered? make it obvious error */
.ignored.triggered {
background-color: #ff0000;
}
.tab-pane-inner#screenshot {
text-align: center;
img {
max-width: 99%;
}
}

File diff suppressed because it is too large

View File

@@ -4,13 +4,12 @@
"description": "",
"main": "index.js",
"scripts": {
"test": "echo \"Error: no test specified\" && exit 1",
"scss": "node-sass --watch styles.scss diff.scss -o ."
"build": "node-sass styles.scss -o .;node-sass diff.scss -o ."
},
"author": "",
"license": "ISC",
"dependencies": {
"node-sass": "^6.0.1",
"node-sass": "^7.0.0",
"tar": "^6.1.9",
"trim-newlines": "^3.0.1"
}

File diff suppressed because one or more lines are too long

View File

@@ -1,7 +1,8 @@
/*
* -- BASE STYLES --
* Most of these are inherited from Base, but I want to change a few.
* npm run scss
* nvm use v14.18.1 && npm install && npm run build
* or npm run watch
*/
body {
color: #333;
@@ -32,16 +33,21 @@ a.github-link {
section.content {
padding-top: 5em;
padding-bottom: 5em;
padding-bottom: 1em;
flex-direction: column;
display: flex;
align-items: center;
justify-content: center;
}
code {
background: #eee;
}
/* table related */
.watch-table {
width: 100%;
font-size: 80%;
tr.unviewed {
font-weight: bold;
@@ -52,7 +58,6 @@ section.content {
}
td {
font-size: 80%;
white-space: nowrap;
}
@@ -104,12 +109,12 @@ section.content {
body:after {
content: "";
background: linear-gradient(130deg, #ff7a18, #af002d 41.07%, #319197 76.05%)
background: linear-gradient(130deg, #5ad8f7, #2f50af 41.07%, #9150bf 84.05%);
}
body:after, body:before {
display: block;
height: 600px;
height: 650px;
position: absolute;
top: 0;
left: 0;
@@ -122,11 +127,8 @@ body::after {
}
body::before {
// background-image set in base.html so it works with reverse proxies etc
content: "";
background-image: url(/static/images/gradient-border.png);
}
body:before {
background-size: cover
}
@@ -233,14 +235,25 @@ body:after, body:before {
background: rgba(255, 255, 255, .5);
}
}
&.with-share-link {
> *:hover {
cursor:pointer;
}
}
}
#notification-customisation {
border: 1px solid #ccc;
padding: 1rem;
padding: 0.5rem;
border-radius: 5px;
}
#notification-error-log {
border: 1px solid #ccc;
padding: 1rem;
border-radius: 5px;
overflow-wrap: break-word;
}
#token-table {
&.pure-table td, &.pure-table th {
@@ -254,14 +267,26 @@ body:after, body:before {
border-radius: 10px;
margin-bottom: 1em;
input {
width: auto !important;
display: inline-block;
margin-bottom: 5px;
}
.label {
display: none;
}
legend {
color: #fff;
font-weight: bold;
}
#watch-add-wrapper-zone {
> div {
display: inline-block;
}
@media only screen and (max-width: 760px) {
#url {
width: 100%;
}
}
}
}
@@ -314,12 +339,10 @@ footer {
*/
}
.sticky-tab {
position: absolute;
top: 80px;
font-size: 8px;
top: 60px;
font-size: 65%;
background: #fff;
padding: 10px;
&#left-sticky {
@@ -328,6 +351,11 @@ footer {
&#right-sticky {
right: 0px;
}
&#hosted-sticky {
right: 0px;
top: 100px;
font-weight: bold;
}
}
#new-version-text a {
@@ -356,11 +384,27 @@ footer {
.pure-form {
fieldset {
padding-top: 0px;
ul {
padding-bottom: 0px;
margin-bottom: 0px;
}
}
.pure-control-group, .pure-group, .pure-controls {
padding-bottom: 1em;
div {
margin: 0px;
}
.checkbox {
> * {
display: inline;
vertical-align: middle;
}
> label {
padding-left: 5px;
}
}
}
/* The input fields with errors */
.error {
@@ -390,14 +434,16 @@ footer {
textarea {
width: 100%;
}
ul#fetch_backend {
margin: 0px;
list-style: none;
> li {
> * {
display: inline-block;
.inline-radio {
ul {
margin: 0px;
list-style: none;
li {
> * {
display: inline-block;
}
}
}
}
}
}
@@ -414,24 +460,47 @@ footer {
}
}
@media only screen and (max-width: 760px), (min-device-width: 768px) and (max-device-width: 800px) {
div.sticky-tab#hosted-sticky {
top: 60px;
left: 0px;
right: auto;
}
section.content {
padding-top: 110px;
}
// Make the tabs easier to hit, they will be all nice and horizontal
div.tabs.collapsable ul li {
display: block;
border-radius: 0px;
margin-right: 0px;
}
input[type='text'] {
width: 100%;
}
/*
Max width before this PARTICULAR table gets nasty
This query will take effect for any screen smaller than 760px
and also iPads specifically.
*/
@media only screen and (max-width: 760px), (min-device-width: 768px) and (max-device-width: 1024px) {
input[type='text'] {
width: 100%;
}
.watch-table {
/* Force table to not be like tables anymore */
thead, tbody, th, td, tr {
display: block;
}
.last-checked {
> span {
vertical-align: middle;
}
}
.last-checked::before {
color: #555;
content: "Last Checked ";
@@ -462,7 +531,7 @@ and also iPads specifically.
/* Behave like a "row" */
border: none;
border-bottom: 1px solid #eee;
vertical-align: middle;
&:before {
/* Top/left values mimic padding */
top: 6px;
@@ -539,9 +608,16 @@ $form-edge-padding: 20px;
display: block;
}
}
.edit-form {
min-width: 70%;
.tab-pane-inner {
.login-form {
.inner {
background: #fff;;
padding: $form-edge-padding;
border-radius: 5px;
}
}
.tab-pane-inner {
&:not(:target) {
display: none;
}
@@ -550,7 +626,24 @@ $form-edge-padding: 20px;
}
// doesn't need padding because there's another row of buttons/activity
padding: 0px;
}
}
#beta-logo {
height: 50px;
// looks better when it's hanging off a little
right: -3px;
top: -3px;
position: absolute;
}
#selector-header {
padding-bottom: 1em;
}
.edit-form {
min-width: 70%;
/* so it can't overflow */
max-width: 95%;
.box-wrap {
position: relative;
}
@@ -562,5 +655,108 @@ $form-edge-padding: 20px;
display: block;
background: #fff;
}
.pure-form-message-inline {
padding-left: 0;
}
}
ul {
padding-left: 1em;
padding-top: 0px;
margin-top: 4px;
}
.time-check-widget {
tr {
display: inline;
input[type="number"] {
width: 5em;
}
}
}
#selector-wrapper {
height: 600px;
overflow-y: scroll;
position: relative;
//width: 100%;
> img {
position: absolute;
z-index: 4;
max-width: 100%;
}
>canvas {
position: relative;
z-index: 5;
max-width: 100%;
&:hover {
cursor: pointer;
}
}
}
#selector-current-xpath {
font-size: 80%;
}
#webdriver-override-options {
input[type="number"] {
width: 5em;
}
}
#api-key {
&:hover {
cursor: pointer;
}
}
#api-key-copy {
color: #0078e7;
}
/* spinner */
.loader,
.loader:after {
border-radius: 50%;
width: 10px;
height: 10px;
}
.loader {
margin: 0px auto;
font-size: 3px;
vertical-align: middle;
display: inline-block;
text-indent: -9999em;
border-top: 1.1em solid rgba(38,104,237, 0.2);
border-right: 1.1em solid rgba(38,104,237, 0.2);
border-bottom: 1.1em solid rgba(38,104,237, 0.2);
border-left: 1.1em solid #2668ed;
-webkit-transform: translateZ(0);
-ms-transform: translateZ(0);
transform: translateZ(0);
-webkit-animation: load8 1.1s infinite linear;
animation: load8 1.1s infinite linear;
}
@-webkit-keyframes load8 {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg);
}
}
@keyframes load8 {
0% {
-webkit-transform: rotate(0deg);
transform: rotate(0deg);
}
100% {
-webkit-transform: rotate(360deg);
transform: rotate(360deg);
}
}

View File

@@ -1,79 +1,46 @@
from os import unlink, path, mkdir
from flask import (
flash
)
import json
import uuid as uuid_builder
from threading import Lock
from copy import deepcopy
import logging
import time
import os
import threading
import time
import uuid as uuid_builder
from copy import deepcopy
from os import mkdir, path, unlink
from threading import Lock
import re
import requests
import secrets
from . model import App, Watch
# Is there an existing library to ensure some data store (JSON etc) is in sync with CRUD methods?
# Open a github issue if you know something :)
# https://stackoverflow.com/questions/6190468/how-to-trigger-function-on-value-change
class ChangeDetectionStore:
lock = Lock()
# For general updates/writes that can wait a few seconds
needs_write = False
# For when we edit, we should write to disk
needs_write_urgent = False
def __init__(self, datastore_path="/datastore", include_default_watches=True, version_tag="0.0.0"):
# Should only be active for docker
# logging.basicConfig(filename='/dev/stdout', level=logging.INFO)
self.needs_write = False
self.datastore_path = datastore_path
self.json_store_path = "{}/url-watches.json".format(self.datastore_path)
self.proxy_list = None
self.stop_thread = False
self.__data = {
'note': "Hello! If you change this file manually, please be sure to restart your changedetection.io instance!",
'watching': {},
'settings': {
'headers': {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Accept-Encoding': 'gzip, deflate', # No support for brotli in python requests yet.
'Accept-Language': 'en-GB,en-US;q=0.9,en;'
},
'requests': {
'timeout': 15, # Default 15 seconds
'minutes_between_check': 3 * 60, # Default 3 hours
'workers': 10 # Number of threads, lower is better for slow connections
},
'application': {
'password': False,
'extract_title_as_title': False,
'fetch_backend': 'html_requests',
'notification_urls': [], # Apprise URL list
# Custom notification content
'notification_title': None,
'notification_body': None,
}
}
}
self.__data = App.model()
# Base definition for all watchers
self.generic_definition = {
'url': None,
'tag': None,
'last_checked': 0,
'last_changed': 0,
'paused': False,
'last_viewed': 0, # history key value of the last viewed via the [diff] link
'newest_history_key': "",
'title': None,
# Re #110, so then if this is set to None, we know to use the default value instead
# Requires setting to None on submit if it's the same as the default
'minutes_between_check': None,
'previous_md5': "",
'uuid': str(uuid_builder.uuid4()),
'headers': {}, # Extra headers to send
'history': {}, # Dict of timestamp and output stripped filename
'ignore_text': [], # List of text to ignore when calculating the comparison checksum
# Custom notification content
'notification_urls': [], # List of URLs to add to the notification Queue (Usually AppRise)
'notification_title': None,
'notification_body': None,
'css_filter': "",
'trigger_text': [], # List of text or regex to wait for until a change is detected
'fetch_backend': None,
}
# deepcopy part of #569 - not sure why it's needed exactly
self.generic_definition = deepcopy(Watch.model(datastore_path = datastore_path, default={}))
if path.isfile('changedetectionio/source.txt'):
with open('changedetectionio/source.txt') as f:
@@ -104,13 +71,10 @@ class ChangeDetectionStore:
if 'application' in from_disk['settings']:
self.__data['settings']['application'].update(from_disk['settings']['application'])
# Reinitialise each `watching` with our generic_definition in the case that we add a new var in the future.
# @todo pretty sure there's a pythonic way to do this with an abstracted(?) object!
# Convert each existing watch back to the Watch.model object
for uuid, watch in self.__data['watching'].items():
_blank = deepcopy(self.generic_definition)
_blank.update(watch)
self.__data['watching'].update({uuid: _blank})
self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
watch['uuid']=uuid
self.__data['watching'][uuid] = Watch.model(datastore_path=self.datastore_path, default=watch)
print("Watching:", uuid, self.__data['watching'][uuid]['url'])
# First time run, doesn't exist.
@@ -118,10 +82,8 @@ class ChangeDetectionStore:
if include_default_watches:
print("Creating JSON store at", self.datastore_path)
self.add_watch(url='http://www.quotationspage.com/random.php', tag='test')
self.add_watch(url='https://news.ycombinator.com/', tag='Tech news')
self.add_watch(url='https://www.gov.uk/coronavirus', tag='Covid')
self.add_watch(url='https://changedetection.io', tag='Tech news')
for i in range(50):
self.add_watch(url='https://changedetection.io/CHANGELOG.txt?x='+str(i), tag='test')
self.__data['version_tag'] = version_tag
@@ -132,41 +94,51 @@ class ChangeDetectionStore:
unlink(password_reset_lockfile)
if not 'app_guid' in self.__data:
import sys
import os
import sys
if "pytest" in sys.modules or "PYTEST_CURRENT_TEST" in os.environ:
self.__data['app_guid'] = "test-" + str(uuid_builder.uuid4())
else:
self.__data['app_guid'] = str(uuid_builder.uuid4())
# Generate the URL access token for RSS feeds
if not 'rss_access_token' in self.__data['settings']['application']:
secret = secrets.token_hex(16)
self.__data['settings']['application']['rss_access_token'] = secret
# Generate the API access token
if not 'api_access_token' in self.__data['settings']['application']:
secret = secrets.token_hex(16)
self.__data['settings']['application']['api_access_token'] = secret
# Proxy list support - available as a selection in settings when text file is imported
# CSV list
# "name, address", or just "name"
proxy_list_file = "{}/proxies.txt".format(self.datastore_path)
if path.isfile(proxy_list_file):
self.import_proxy_list(proxy_list_file)
# Bump the update version by running updates
self.run_updates()
self.needs_write = True
# Finally start the thread that will manage periodic data saves to JSON
save_data_thread = threading.Thread(target=self.save_datastore).start()
# Returns the newest key, but if there's only 1 record, then it's counted as not being new, so return 0.
def get_newest_history_key(self, uuid):
if len(self.__data['watching'][uuid]['history']) == 1:
return 0
dates = list(self.__data['watching'][uuid]['history'].keys())
# Convert to int, sort and back to str again
dates = [int(i) for i in dates]
dates.sort(reverse=True)
if len(dates):
# always keyed as str
return str(dates[0])
return 0
def set_last_viewed(self, uuid, timestamp):
logging.debug("Setting watch UUID: {} last viewed to {}".format(uuid, int(timestamp)))
self.data['watching'][uuid].update({'last_viewed': int(timestamp)})
self.needs_write = True
def remove_password(self):
self.__data['settings']['application']['password'] = False
self.needs_write = True
def update_watch(self, uuid, update_obj):
# Skip if 'paused' state
if self.__data['watching'][uuid]['paused']:
# It's possible that the watch could be deleted before update
if not self.__data['watching'].get(uuid):
return
with self.lock:
@@ -179,38 +151,47 @@ class ChangeDetectionStore:
del (update_obj[dict_key])
self.__data['watching'][uuid].update(update_obj)
self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
self.needs_write = True
@property
def threshold_seconds(self):
seconds = 0
for m, n in Watch.mtable.items():
x = self.__data['settings']['requests']['time_between_check'].get(m)
if x:
seconds += x * n
return seconds
@property
def has_unviewed(self):
for uuid, watch in self.__data['watching'].items():
if watch.viewed == False:
return True
return False
@property
def data(self):
has_unviewed = False
for uuid, v in self.__data['watching'].items():
self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
if int(v['newest_history_key']) <= int(v['last_viewed']):
self.__data['watching'][uuid]['viewed'] = True
else:
self.__data['watching'][uuid]['viewed'] = False
has_unviewed = True
for uuid, watch in self.__data['watching'].items():
# #106 - Be sure this is None on empty string, False, None, etc
if not self.__data['watching'][uuid]['title']:
self.__data['watching'][uuid]['title'] = None
# Default var for fetch_backend
# @todo this may not be needed anymore, or could be easily removed
if not self.__data['watching'][uuid]['fetch_backend']:
self.__data['watching'][uuid]['fetch_backend'] = self.__data['settings']['application']['fetch_backend']
self.__data['has_unviewed'] = has_unviewed
# Re #152, Return env base_url if not overridden, @todo also prefer the proxy pass url
env_base_url = os.getenv('BASE_URL','')
if not self.__data['settings']['application']['base_url']:
self.__data['settings']['application']['base_url'] = env_base_url.strip('" ')
return self.__data
def get_all_tags(self):
tags = []
for uuid, watch in self.data['watching'].items():
if watch['tag'] is None:
continue
# Support for comma separated list of tags.
for tag in watch['tag'].split(','):
tag = tag.strip()
@@ -234,37 +215,24 @@ class ChangeDetectionStore:
# GitHub #30 also delete history records
for uuid in self.data['watching']:
for path in self.data['watching'][uuid]['history'].values():
for path in self.data['watching'][uuid].history.values():
self.unlink_history_file(path)
else:
for path in self.data['watching'][uuid]['history'].values():
for path in self.data['watching'][uuid].history.values():
self.unlink_history_file(path)
del self.data['watching'][uuid]
self.needs_write = True
self.needs_write_urgent = True
# Clone a watch by UUID
def clone(self, uuid):
with self.lock:
new_uuid = str(uuid_builder.uuid4())
_clone = deepcopy(self.data['watching'][uuid])
_clone.update({'uuid': new_uuid})
attributes_to_reset = [
'last_checked',
'last_changed',
'last_viewed',
'newest_history_key',
'previous_md5',
'history'
]
for attribute in attributes_to_reset:
_clone.update({attribute: self.generic_definition[attribute]})
self.data['watching'][new_uuid] = _clone
self.needs_write = True
url = self.data['watching'][uuid]['url']
tag = self.data['watching'][uuid]['tag']
extras = self.data['watching'][uuid]
new_uuid = self.add_watch(url=url, tag=tag, extras=extras)
return new_uuid
def url_exists(self, url):
@@ -280,59 +248,81 @@ class ChangeDetectionStore:
return self.data['watching'][uuid].get(val)
# Remove a watchs data but keep the entry (URL etc)
def scrub_watch(self, uuid, limit_timestamp = False):
def clear_watch_history(self, uuid):
import pathlib
import hashlib
del_timestamps = []
self.__data['watching'][uuid].update(
{'last_checked': 0,
'last_changed': 0,
'last_viewed': 0,
'previous_md5': False,
'last_notification_error': False,
'last_error': False})
changes_removed = 0
# JSON Data, Screenshots, Textfiles (history index and snapshots), HTML in the future etc
for item in pathlib.Path(os.path.join(self.datastore_path, uuid)).rglob("*.*"):
unlink(item)
for timestamp, path in self.data['watching'][uuid]['history'].items():
if not limit_timestamp or (limit_timestamp is not False and int(timestamp) > limit_timestamp):
self.unlink_history_file(path)
del_timestamps.append(timestamp)
changes_removed += 1
# Force the attr to recalculate
bump = self.__data['watching'][uuid].history
if not limit_timestamp:
self.data['watching'][uuid]['last_checked'] = 0
self.data['watching'][uuid]['last_changed'] = 0
self.data['watching'][uuid]['previous_md5'] = 0
self.needs_write_urgent = True
def add_watch(self, url, tag="", extras=None, write_to_disk_now=True):
for timestamp in del_timestamps:
del self.data['watching'][uuid]['history'][str(timestamp)]
if extras is None:
extras = {}
# should always be str
if tag is None or not tag:
tag=''
# If there was a limitstamp, we need to reset some meta data about the entry
# This has to happen after we remove the others from the list
if limit_timestamp:
newest_key = self.get_newest_history_key(uuid)
if newest_key:
self.data['watching'][uuid]['last_checked'] = int(newest_key)
# @todo should be the original value if it was less than newest key
self.data['watching'][uuid]['last_changed'] = int(newest_key)
try:
with open(self.data['watching'][uuid]['history'][str(newest_key)], "rb") as fp:
content = fp.read()
self.data['watching'][uuid]['previous_md5'] = hashlib.md5(content).hexdigest()
except (FileNotFoundError, IOError):
self.data['watching'][uuid]['previous_md5'] = False
pass
# In case these are copied across, assume it's a reference and deepcopy()
apply_extras = deepcopy(extras)
self.needs_write = True
return changes_removed
# Was it a share link? try to fetch the data
if (url.startswith("https://changedetection.io/share/")):
try:
r = requests.request(method="GET",
url=url,
# So we know to return the JSON instead of the human-friendly "help" page
headers={'App-Guid': self.__data['app_guid']})
res = r.json()
# List of permissible attributes we accept from the wild internet
for k in ['url', 'tag',
'paused', 'title',
'previous_md5', 'headers',
'body', 'method',
'ignore_text', 'css_filter',
'subtractive_selectors', 'trigger_text',
'extract_title_as_title', 'extract_text',
'text_should_not_be_present',
'webdriver_js_execute_code']:
if res.get(k):
apply_extras[k] = res[k]
except Exception as e:
logging.error("Error fetching metadata for shared watch link", url, str(e))
flash("Error fetching metadata for {}".format(url), 'error')
return False
def add_watch(self, url, tag):
with self.lock:
# @todo use a common generic version of this
new_uuid = str(uuid_builder.uuid4())
_blank = deepcopy(self.generic_definition)
_blank.update({
# #Re 569
new_watch = Watch.model(datastore_path=self.datastore_path, default={
'url': url,
'tag': tag,
'uuid': new_uuid
'tag': tag
})
self.data['watching'][new_uuid] = _blank
new_uuid = new_watch['uuid']
logging.debug("Added URL {} - {}".format(url, new_uuid))
for k in ['uuid', 'history', 'last_checked', 'last_changed', 'newest_history_key', 'previous_md5', 'viewed']:
if k in apply_extras:
del apply_extras[k]
new_watch.update(apply_extras)
self.__data['watching'][new_uuid]=new_watch
# Get the directory ready
output_path = "{}/{}".format(self.datastore_path, new_uuid)
@@ -341,39 +331,68 @@ class ChangeDetectionStore:
except FileExistsError:
print(output_path, "already exists.")
self.sync_to_json()
if write_to_disk_now:
self.sync_to_json()
return new_uuid
# Save some text file to the appropriate path and bump the history
# result_obj from fetch_site_status.run()
def save_history_text(self, watch_uuid, contents):
import uuid
def get_screenshot(self, watch_uuid):
output_path = "{}/{}".format(self.datastore_path, watch_uuid)
fname = "{}/{}.stripped.txt".format(output_path, uuid.uuid4())
fname = "{}/last-screenshot.png".format(output_path)
if path.isfile(fname):
return fname
return False
def visualselector_data_is_ready(self, watch_uuid):
output_path = "{}/{}".format(self.datastore_path, watch_uuid)
screenshot_filename = "{}/last-screenshot.png".format(output_path)
elements_index_filename = "{}/elements.json".format(output_path)
if path.isfile(screenshot_filename) and path.isfile(elements_index_filename) :
return True
return False
# Save as PNG, PNG is larger but better for doing visual diff in the future
def save_screenshot(self, watch_uuid, screenshot: bytes):
output_path = "{}/{}".format(self.datastore_path, watch_uuid)
fname = "{}/last-screenshot.png".format(output_path)
with open(fname, 'wb') as f:
f.write(contents)
f.write(screenshot)
f.close()
def save_xpath_data(self, watch_uuid, data):
output_path = "{}/{}".format(self.datastore_path, watch_uuid)
fname = "{}/elements.json".format(output_path)
with open(fname, 'w') as f:
f.write(json.dumps(data))
f.close()
return fname
def sync_to_json(self):
print("Saving..")
data ={}
logging.info("Saving JSON..")
print("Saving JSON..")
try:
data = deepcopy(self.__data)
except RuntimeError:
time.sleep(0.5)
print ("! Data changed when writing to JSON, trying again..")
except RuntimeError as e:
# Try again in 15 seconds
time.sleep(15)
logging.error ("! Data changed when writing to JSON, trying again.. %s", str(e))
self.sync_to_json()
return
else:
with open(self.json_store_path, 'w') as json_file:
json.dump(data, json_file, indent=4)
logging.info("Re-saved index")
try:
# Re #286 - First write to a temp file, then confirm it looks OK and rename it
# This is a fairly basic strategy to deal with the case that the file is corrupted,
# system was out of memory, out of RAM etc
with open(self.json_store_path+".tmp", 'w') as json_file:
json.dump(data, json_file, indent=4)
os.replace(self.json_store_path+".tmp", self.json_store_path)
except Exception as e:
logging.error("Error writing JSON!! (Main JSON file save was skipped) : %s", str(e))
self.needs_write = False
self.needs_write_urgent = False
# Thread runner, this helps with thread/write issues when there are many operations that want to update the JSON
# by just running periodically in one thread, according to python, dict updates are threadsafe.
@@ -384,9 +403,15 @@ class ChangeDetectionStore:
print("Shutting down datastore thread")
return
if self.needs_write:
if self.needs_write or self.needs_write_urgent:
self.sync_to_json()
time.sleep(3)
# Once per minute is enough; any more often and it can cause high CPU usage
# better here would be to use something like self.app.config.exit.wait(1), but we can't get to 'app' from here
for i in range(120):
time.sleep(0.5)
if self.stop_thread or self.needs_write_urgent:
break
# Go through the datastore path and remove any snapshots that are not mentioned in the index
# This usually is not used, but can be handy.
@@ -395,12 +420,108 @@ class ChangeDetectionStore:
index=[]
for uuid in self.data['watching']:
for id in self.data['watching'][uuid]['history']:
index.append(self.data['watching'][uuid]['history'][str(id)])
for id in self.data['watching'][uuid].history:
index.append(self.data['watching'][uuid].history[str(id)])
import pathlib
# Only in the sub-directories
for item in pathlib.Path(self.datastore_path).rglob("*/*txt"):
if not str(item) in index:
print ("Removing",item)
unlink(item)
for uuid in self.data['watching']:
for item in pathlib.Path(self.datastore_path).rglob(uuid+"/*.txt"):
if not str(item) in index:
print ("Removing",item)
unlink(item)
def import_proxy_list(self, filename):
import csv
with open(filename, newline='') as f:
reader = csv.reader(f, skipinitialspace=True)
# @todo This loop could be improved
l = []
for row in reader:
if len(row):
if len(row)>=2:
l.append(tuple(row[:2]))
else:
l.append(tuple([row[0], row[0]]))
self.proxy_list = l if len(l) else None
# Run all updates
# IMPORTANT - Each update could be run even on a fresh install where the schema is already correct
# So therefore - each `update_n` should be very careful about checking if it needs to actually run
# Probably we should bump the current update schema version with each tag release version?
def run_updates(self):
import inspect
import shutil
updates_available = []
for i, o in inspect.getmembers(self, predicate=inspect.ismethod):
m = re.search(r'update_(\d+)$', i)
if m:
updates_available.append(int(m.group(1)))
updates_available.sort()
for update_n in updates_available:
if update_n > self.__data['settings']['application']['schema_version']:
print ("Applying update_{}".format((update_n)))
# Won't exist on fresh installs
if os.path.exists(self.json_store_path):
shutil.copyfile(self.json_store_path, self.datastore_path+"/url-watches-before-{}.json".format(update_n))
try:
update_method = getattr(self, "update_{}".format(update_n))()
except Exception as e:
print("Error while trying update_{}".format((update_n)))
print(e)
# Don't run any more updates
return
else:
# Bump the version, important
self.__data['settings']['application']['schema_version'] = update_n
# Convert minutes to seconds on settings and each watch
def update_1(self):
if self.data['settings']['requests'].get('minutes_between_check'):
self.data['settings']['requests']['time_between_check']['minutes'] = self.data['settings']['requests']['minutes_between_check']
# Remove the default 'hours' that is set from the model
self.data['settings']['requests']['time_between_check']['hours'] = None
for uuid, watch in self.data['watching'].items():
if 'minutes_between_check' in watch:
# Only upgrade individual watch time if it was set
if watch.get('minutes_between_check', False):
self.data['watching'][uuid]['time_between_check']['minutes'] = watch['minutes_between_check']
# Move the history list to a flat text file index
# Better than SQLite because this list is only appended to, and works across NAS / NFS type setups
def update_2(self):
# @todo test running this on a newly updated one (when this already ran)
for uuid, watch in self.data['watching'].items():
history = []
if watch.get('history', False):
for d, p in watch['history'].items():
d = int(d) # Used to be keyed as str, we'll fix this now too
history.append("{},{}\n".format(d,p))
if len(history):
target_path = os.path.join(self.datastore_path, uuid)
if os.path.exists(target_path):
with open(os.path.join(target_path, "history.txt"), "w") as f:
f.writelines(history)
else:
logging.warning("Datastore history directory {} does not exist, skipping history import.".format(target_path))
# No longer needed, dynamically pulled from the disk when needed.
# But we should set it back to an empty dict so we don't break if this schema runs on an earlier version.
# In the distant future we can remove this entirely
self.data['watching'][uuid]['history'] = {}
# We incorrectly stored last_changed when there was not a change, and then confused the output list table
def update_3(self):
for uuid, watch in self.data['watching'].items():
# Be sure it's recalculated
p = watch.history
if watch.history_n < 2:
watch['last_changed'] = 0

View File

@@ -1,37 +1,46 @@
{% from '_helpers.jinja' import render_field %}
{% macro render_notifications_field(form) %}
<fieldset>
<div class="field-group">
{% macro render_common_settings_form(form, current_base_url, emailprefix) %}
<div class="pure-control-group">
{{ render_field(form.notification_urls, rows=5, placeholder="Examples:
Gitter - gitter://token/room
Office365 - o365://TenantID:AccountEmail/ClientID/ClientSecret/TargetEmail
AWS SNS - sns://AccessKeyID/AccessSecretKey/RegionName/+PhoneNo
SMTPS - mailtos://user:pass@mail.domain.com?to=receivingAddress@example.com")
SMTPS - mailtos://user:pass@mail.domain.com?to=receivingAddress@example.com", class="notification-urls")
}}
<div class="pure-form-message-inline">Use <a target=_new
href="https://github.com/caronc/apprise">AppRise
URLs</a> for notification to just about any service!
<div class="pure-form-message-inline">
<ul>
<li>Use <a target=_new href="https://github.com/caronc/apprise">AppRise URLs</a> for notification to just about any service! <i><a target=_new href="https://github.com/dgtlmoon/changedetection.io/wiki/Notification-configuration-notes">Please read the notification services wiki here for important configuration notes</a></i>.</li>
<li><code>discord://</code> only supports a maximum <strong>2,000 characters</strong> of notification text, including the title.</li>
<li><code>tgram://</code> bots can't send messages to other bots, so you should specify the chat ID of a non-bot user.</li>
<li><code>tgram://</code> only supports very limited HTML and can fail when extra tags are sent, <a href="https://core.telegram.org/bots/api#html-style">read more here</a> (or use plaintext/markdown format)</li>
</ul>
</div>
<br/>
<a id="send-test-notification" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Send test notification</a>
{% if emailprefix %}
<a id="add-email-helper" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Add email</a>
{% endif %}
<a href="{{url_for('notification_logs')}}" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Notification debug logs</a>
</div>
<div id="notification-customisation">
<div id="notification-customisation" class="pure-control-group">
<div class="pure-control-group">
{{ render_field(form.notification_title, class="m-d") }}
{{ render_field(form.notification_title, class="m-d notification-title") }}
<span class="pure-form-message-inline">Title for all notifications</span>
</div>
<div class="pure-control-group">
{{ render_field(form.notification_body , rows=5) }}
{{ render_field(form.notification_body , rows=5, class="notification-body") }}
<span class="pure-form-message-inline">Body for all notifications</span>
</div>
<div class="pure-control-group">
{{ render_field(form.notification_format , rows=5, class="notification-format") }}
<span class="pure-form-message-inline">Format for all notifications</span>
</div>
<div class="pure-controls">
<span class="pure-form-message-inline">
These tokens can be used in the notification body and title to
customise the notification text.
</span>
These tokens can be used in the notification body and title to customise the notification text.
<table class="pure-table" id="token-table">
<thead>
<tr>
@@ -64,6 +73,14 @@
<td><code>{preview_url}</code></td>
<td>The URL of the preview page generated by changedetection.io.</td>
</tr>
<tr>
<td><code>{diff}</code></td>
<td>The diff output - differences only</td>
</tr>
<tr>
<td><code>{diff_full}</code></td>
<td>The diff output - full difference output</td>
</tr>
<tr>
<td><code>{diff_url}</code></td>
<td>The URL of the diff page generated by changedetection.io.</td>
@@ -75,16 +92,10 @@
</tr>
</tbody>
</table>
<span class="pure-form-message-inline">
<br/>
URLs generated by changedetection.io (such as <code>{diff_url}</code>) require the <code>BASE_URL</code> environment variable set.<br/>
Your <code>BASE_URL</code> var is currently "{{base_url}}"
Your <code>BASE_URL</code> var is currently "{{current_base_url}}"
</span>
</div>
</div>
<div class="pure-control-group">
{{ render_field(form.trigger_check) }}
</div>
</div>
</fieldset>
{% endmacro %}
{% endmacro %}

View File

@@ -1,3 +1,30 @@
{% macro render_field(field) %}
<div {% if field.errors %} class="error" {% endif %}>{{ field(**kwargs)|safe }}
<div {% if field.errors %} class="error" {% endif %}>{{ field.label }}</div>
{% if field.errors %}
<ul class=errors>
{% for error in field.errors %}
<li>{{ error }}</li>
{% endfor %}
</ul>
{% endif %}
</div>
{% endmacro %}
{% macro render_checkbox_field(field) %}
<div class="checkbox {% if field.errors %} error {% endif %}">
{{ field(**kwargs)|safe }} {{ field.label }}
{% if field.errors %}
<ul class=errors>
{% for error in field.errors %}
<li>{{ error }}</li>
{% endfor %}
</ul>
{% endif %}
</div>
{% endmacro %}
{% macro render_field(field) %}
<div {% if field.errors %} class="error" {% endif %}>{{ field.label }}</div>
<div {% if field.errors %} class="error" {% endif %}>{{ field(**kwargs)|safe }}
@@ -25,3 +52,6 @@
{% endmacro %}
{% macro render_button(field) %}
{{ field(**kwargs)|safe }}
{% endmacro %}

View File

@@ -5,6 +5,7 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Self hosted website change detection.">
<title>Change Detection{{extra_title}}</title>
<link rel="alternate" type="application/rss+xml" title="Changedetection.io » Feed{% if active_tag %}- {{active_tag}}{% endif %}" href="{{ url_for('rss', tag=active_tag , token=app_rss_token)}}" />
<link rel="stylesheet" href="{{url_for('static_content', group='styles', filename='pure-min.css')}}">
<link rel="stylesheet" href="{{url_for('static_content', group='styles', filename='styles.css')}}">
{% if extra_stylesheets %}
@@ -12,7 +13,15 @@
<link rel="stylesheet" href="{{ m }}?ver=1000">
{% endfor %}
{% endif %}
<style>
body::before {
background-image: url({{url_for('static_content', group='images', filename='gradient-border.png')}});
}
</style>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script>
</head>
<body>
<div class="header">
@@ -35,13 +44,13 @@
{% if current_user.is_authenticated or not has_password %}
{% if not current_diff_url %}
<li class="pure-menu-item">
<a href="{{ url_for('get_backup')}}" class="pure-menu-link">BACKUP</a>
<a href="{{ url_for('settings_page')}}" class="pure-menu-link">SETTINGS</a>
</li>
<li class="pure-menu-item">
<a href="{{ url_for('import_page')}}" class="pure-menu-link">IMPORT</a>
</li>
<li class="pure-menu-item">
<a href="{{ url_for('settings_page')}}" class="pure-menu-link">SETTINGS</a>
<a href="{{ url_for('get_backup')}}" class="pure-menu-link">BACKUP</a>
</li>
{% else %}
<li class="pure-menu-item">
@@ -68,7 +77,7 @@
</ul>
</div>
</div>
{% if hosted_sticky %}<div class="sticky-tab" id="hosted-sticky"><a href="https://lemonade.changedetection.io/start?ref={{guid}}">Let us host your instance!</a></div>{% endif %}
{% if left_sticky %}<div class="sticky-tab" id="left-sticky"><a href="{{url_for('preview_page', uuid=uuid)}}">Show current snapshot</a></div> {% endif %}
{% if right_sticky %}<div class="sticky-tab" id="right-sticky">{{ right_sticky }}</div> {% endif %}
<section class="content">
@@ -85,6 +94,13 @@
</ul>
{% endif %}
{% endwith %}
{% if session['share-link'] %}
<ul class="messages with-share-link">
<li class="message">Share this link: <span id="share-link">{{ session['share-link'] }}</span> <img style="height: 1em;display:inline-block;" src="{{url_for('static_content', group='images', filename='copy.svg')}}" /></li>
</ul>
{% endif %}
{% block content %}
{% endblock %}

View File

@@ -2,27 +2,23 @@
{% block content %}
<div class="edit-form">
<form class="pure-form pure-form-stacked" action="{{url_for('scrub_page')}}" method="POST">
<div class="box-wrap inner">
<form class="pure-form pure-form-stacked" action="{{url_for('clear_all_history')}}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
<fieldset>
<div class="pure-control-group">
This will remove all version snapshots/data, but keep your list of URLs. <br/>
This will remove version history (snapshots) for ALL watches, but keep your list of URLs! <br/>
You may like to use the <strong>BACKUP</strong> link first.<br/>
</div>
<br/>
<div class="pure-control-group">
<label for="confirmtext">Confirmation text</label>
<input type="text" id="confirmtext" required="" name="confirmtext" value="" size="10"/>
<span class="pure-form-message-inline">Type in the word <strong>scrub</strong> to confirm that you understand!</span>
<span class="pure-form-message-inline">Type in the word <strong>clear</strong> to confirm that you understand.</span>
</div>
<br/>
<div class="pure-control-group">
<label for="confirmtext">Optional: Limit deletion of snapshots to snapshots <i>newer</i> than date/time</label>
<input type="datetime-local" id="limit_date" name="limit_date" />
<span class="pure-form-message-inline">dd/mm/yyyy hh:mm (24 hour format)</span>
</div>
<br/>
<div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Scrub!</button>
<button type="submit" class="pure-button pure-button-primary">Clear History!</button>
</div>
<br/>
<div class="pure-control-group">
@@ -30,6 +26,7 @@
</div>
</fieldset>
</form>
</div>
</div>
{% endblock %}

View File

@@ -1,6 +1,10 @@
{% extends 'base.html' %}
{% block content %}
<script>
const screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid)}}";
</script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='diff-overview.js')}}" defer></script>
<div id="settings">
<h1>Differences</h1>
@@ -18,7 +22,7 @@
{% if versions|length >= 1 %}
<label for="diff-version">Compare newest (<span id="current-v-date"></span>) with</label>
<select id="diff-version" name="previous_version">
{% for version in versions %}
{% for version in versions|reverse %}
<option value="{{version}}" {% if version== current_previous_version %} selected="" {% endif %}>
{{version}}
</option>
@@ -35,21 +39,48 @@
<div id="diff-jump">
<a onclick="next_diff();">Jump</a>
</div>
<div id="diff-ui">
<table>
<tbody>
<tr>
<!-- just proof of concept copied straight from github.com/kpdecker/jsdiff -->
<td id="a" style="display: none;">{{previous}}</td>
<td id="b" style="display: none;">{{newest}}</td>
<td id="diff-col">
<span id="result"></span>
</td>
</tr>
</tbody>
</table>
Diff algorithm from the amazing <a href="https://github.com/kpdecker/jsdiff">github.com/kpdecker/jsdiff</a>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<div class="tabs">
<ul>
<li class="tab" id="default-tab"><a href="#text">Text</a></li>
<li class="tab" id="screenshot-tab"><a href="#screenshot">Screenshot</a></li>
</ul>
</div>
<div id="diff-ui">
<div class="tab-pane-inner" id="text">
<div class="tip">Pro-tip: Use <strong>show current snapshot</strong> tab to visualise what will be ignored.
</div>
<table>
<tbody>
<tr>
<!-- just proof of concept copied straight from github.com/kpdecker/jsdiff -->
<td id="a" style="display: none;">{{previous}}</td>
<td id="b" style="display: none;">{{newest}}</td>
<td id="diff-col">
<span id="result"></span>
</td>
</tr>
</tbody>
</table>
Diff algorithm from the amazing <a href="https://github.com/kpdecker/jsdiff">github.com/kpdecker/jsdiff</a>
</div>
<div class="tab-pane-inner" id="screenshot">
<div class="tip">
For now, differences are performed on text, not graphically; only the latest screenshot is available.
</div>
<br/>
{% if is_html_webdriver %}
{% if screenshot %}
<img style="max-width: 80%" id="screenshot-img" alt="Current screenshot from most recent request"/>
{% else %}
No screenshot available just yet! Try rechecking the page.
{% endif %}
{% else %}
<strong>Screenshot requires Playwright/WebDriver enabled</strong>
{% endif %}
</div>
</div>

View File

@@ -1,28 +1,46 @@
{% extends 'base.html' %}
{% block content %}
{% from '_helpers.jinja' import render_field %}
{% from '_common_fields.jinja' import render_notifications_field %}
{% from '_helpers.jinja' import render_field, render_checkbox_field, render_button %}
{% from '_common_fields.jinja' import render_common_settings_form %}
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<script>
const notification_base_url="{{url_for('ajax_callback_send_notification_test')}}";
const watch_visual_selector_data_url="{{url_for('static_content', group='visual_selector_data', filename=uuid)}}";
const screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid)}}";
const playwright_enabled={% if playwright_enabled %} true {% else %} false {% endif %};
{% if emailprefix %}
const email_notification_prefix=JSON.parse('{{ emailprefix|tojson }}');
{% endif %}
</script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-settings.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='visual-selector.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='limit.js')}}" defer></script>
<div class="edit-form monospaced-textarea">
<div class="tabs">
<div class="tabs collapsable">
<ul>
<li class="tab" id="default-tab"><a href="#general">General</a></li>
<li class="tab"><a href="#request">Request</a></li>
<li class="tab"><a id="visualselector-tab" href="#visualselector">Visual Filter Selector</a></li>
<li class="tab"><a href="#filters-and-triggers">Filters &amp; Triggers</a></li>
<li class="tab"><a href="#notifications">Notifications</a></li>
<li class="tab"><a href="#filters">Filters</a></li>
<li class="tab"><a href="#triggers">Triggers</a></li>
</ul>
</div>
<div class="box-wrap inner">
<form class="pure-form pure-form-stacked"
action="{{ url_for('edit_page', uuid=uuid, next = request.args.get('next') ) }}" method="POST">
action="{{ url_for('edit_page', uuid=uuid, next = request.args.get('next'), unpause_on_save = request.args.get('unpause_on_save')) }}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
<div class="tab-pane-inner" id="general">
<fieldset>
<div class="pure-control-group">
{{ render_field(form.url, placeholder="https://...", required=true, class="m-d") }}
<span class="pure-form-message-inline">Some sites use JavaScript to create the content, for this you should <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver">use the Chrome/WebDriver Fetcher</a></span>
</div>
<div class="pure-control-group">
{{ render_field(form.title, class="m-d") }}
@@ -32,8 +50,8 @@
<span class="pure-form-message-inline">Organisational tag/group name used in the main listing page</span>
</div>
<div class="pure-control-group">
{{ render_field(form.minutes_between_check) }}
{% if using_default_minutes %}
{{ render_field(form.time_between_check, class="time-check-widget") }}
{% if has_empty_checktime %}
<span class="pure-form-message-inline">Currently using the <a
href="{{ url_for('settings_page', uuid=uuid) }}">default global settings</a>, change to another value if you want to be specific.</span>
{% else %}
@@ -41,78 +59,262 @@
href="{{ url_for('settings_page', uuid=uuid) }}">default global settings</a>.</span>
{% endif %}
</div>
<fieldset class="pure-group">
{{ render_field(form.headers, rows=5, placeholder="Example
Cookie: foobar
User-Agent: wonderbra 1.0") }}
<span class="pure-form-message-inline">
Note: ONLY used by Basic fast Plaintext/HTTP Client
</span>
</fieldset>
<div class="pure-control-group">
{{ render_field(form.fetch_backend) }}
{{ render_checkbox_field(form.extract_title_as_title) }}
</div>
<div class="pure-control-group">
{{ render_checkbox_field(form.filter_failure_notification_send) }}
<span class="pure-form-message-inline">
<p>Use the <strong>Basic</strong> method (default) where your watched sites don't need Javascript to render.</p>
<p>The <strong>Chrome/Javascript</strong> method requires a network connection to a running WebDriver+Chrome server. </p>
Sends a notification when the filter can no longer be seen on the page, good for knowing when the page changed and your filter will not work anymore.
</span>
</div>
</fieldset>
</div>
<div class="tab-pane-inner" id="notifications">
<strong>Note: <i>These settings override the global settings.</i></strong>
{{ render_notifications_field(form) }}
<div class="tab-pane-inner" id="request">
<div class="pure-control-group inline-radio">
{{ render_field(form.fetch_backend, class="fetch-backend") }}
<span class="pure-form-message-inline">
<p>Use the <strong>Basic</strong> method (default) where your watched site doesn't need Javascript to render.</p>
<p>The <strong>Chrome/Javascript</strong> method requires a network connection to a running WebDriver+Chrome server, set by the ENV var 'WEBDRIVER_URL'. </p>
</span>
</div>
{% if form.proxy %}
<div class="pure-control-group inline-radio">
{{ render_field(form.proxy, class="fetch-backend-proxy") }}
<span class="pure-form-message-inline">
Choose a proxy for this watch
</span>
</div>
{% endif %}
<fieldset id="webdriver-override-options">
<div class="pure-control-group">
{{ render_field(form.webdriver_delay) }}
<div class="pure-form-message-inline">
<strong>If you're having trouble waiting for the page to be fully rendered (text missing etc), try increasing the 'wait' time here.</strong>
<br/>
This will wait <i>n</i> seconds before extracting the text.
{% if using_global_webdriver_wait %}
<br/><strong>Using the current global default settings</strong>
{% endif %}
</div>
</div>
<div class="pure-control-group">
{{ render_field(form.webdriver_js_execute_code) }}
<div class="pure-form-message-inline">
Run this code before performing change detection, handy for filling in fields and other actions <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Run-JavaScript-before-change-detection">More help and examples here</a>
</div>
</div>
</fieldset>
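The two options above boil down to: navigate, optionally run user-supplied JavaScript, wait n seconds, then extract the page content. A minimal sketch of that flow using Playwright's sync API follows; this is illustrative only, not the project's fetcher code, and the URL, delay and JS snippet are placeholder values.

import time
from playwright.sync_api import sync_playwright

def fetch_rendered_html(url, webdriver_delay=5, js_execute_code=None):
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(url)
        if js_execute_code:
            # e.g. click a "load more" button or fill a form before the check
            page.evaluate(js_execute_code)
        time.sleep(webdriver_delay)  # wait n seconds before extracting the text
        html = page.content()
        browser.close()
    return html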
<fieldset class="pure-group" id="requests-override-options">
{% if not playwright_enabled %}
<div class="pure-form-message-inline">
<strong>Request override is currently only used by the <i>Basic fast Plaintext/HTTP Client</i> method.</strong>
</div>
{% endif %}
<div class="pure-control-group" id="request-method">
{{ render_field(form.method) }}
</div>
<div class="pure-control-group" id="request-headers">
{{ render_field(form.headers, rows=5, placeholder="Example
Cookie: foobar
User-Agent: wonderbra 1.0") }}
</div>
<div class="pure-control-group" id="request-body">
{{ render_field(form.body, rows=5, placeholder="Example
{
\"name\":\"John\",
\"age\":30,
\"car\":null
}") }}
</div>
<div id="ignore-status-codes-option">
{{ render_checkbox_field(form.ignore_status_codes) }}
</div>
</fieldset>
<br/>
</div>
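As the note above says, these request overrides only apply to the basic plaintext/HTTP client. A rough sketch of what method, headers, body and "ignore status codes" map to with the requests library (the function and argument names here are illustrative, not the project's internals):

import requests

def basic_fetch(url, method="GET", headers=None, body=None, ignore_status_codes=False):
    r = requests.request(method, url, headers=headers or {}, data=body, timeout=30)
    if not ignore_status_codes:
        r.raise_for_status()  # a 4xx/5xx is treated as a fetch error unless ignored
    return r.text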
<div class="tab-pane-inner" id="filters">
<div class="tab-pane-inner" id="notifications">
<strong>Note: <i>These settings override the global settings for this watch.</i></strong>
<fieldset>
<div class="field-group">
{{ render_common_settings_form(form, current_base_url, emailprefix) }}
</div>
</fieldset>
</div>
<div class="tab-pane-inner" id="filters-and-triggers">
<div class="pure-control-group">
{{ render_field(form.css_filter, placeholder=".class-name or #some-id, or other CSS selector rule.",
class="m-d") }}
<strong>Pro-tips:</strong><br/>
<ul>
<li>
Use the preview page to see your filters and triggers highlighted.
</li>
<li>
Some sites use JavaScript to create the content, for this you should <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver">use the Chrome/WebDriver Fetcher</a>
</li>
</ul>
</div>
<fieldset>
<div class="pure-control-group">
{{ render_checkbox_field(form.check_unique_lines) }}
<span class="pure-form-message-inline">Good for websites that just move the content around, and you want to know when NEW content is added, compares new lines against all history for this watch.</span>
</div>
</fieldset>
<div class="pure-control-group">
{% set field = render_field(form.css_filter,
placeholder=".class-name or #some-id, or other CSS selector rule.",
class="m-d")
%}
{{ field }}
{% if '/text()' in field %}
<span class="pure-form-message-inline"><strong>Note!: //text() function does not work where the &lt;element&gt; contains &lt;![CDATA[]]&gt;</strong></span><br/>
{% endif %}
<span class="pure-form-message-inline">
<ul>
<li>CSS - Limit text to this CSS rule, only text matching this CSS rule is included.</li>
<li>JSON - Limit text to this JSON rule, using <a href="https://pypi.org/project/jsonpath-ng/">JSONPath</a>, prefix with <b>"json:"</b>, <a
<li>JSON - Limit text to this JSON rule, using <a href="https://pypi.org/project/jsonpath-ng/">JSONPath</a>, prefix with <code>"json:"</code>, use <code>json:$</code> to force re-formatting if required, <a
href="https://jsonpath.com/" target="new">test your JSONPath here</a></li>
<li>XPath - Limit text to this XPath rule, simply start with a forward-slash,
<ul>
<li>Example: <code>//*[contains(@class, 'sametext')]</code> or <code>xpath://*[contains(@class, 'sametext')]</code>, <a
href="http://xpather.com/" target="new">test your XPath here</a></li>
<li>Example: Get all titles from an RSS feed <code>//title/text()</code></li>
</ul>
</li>
</ul>
Please be sure that you thoroughly understand how to write CSS or JSONPath selector rules before filing an issue on GitHub! <a
Please be sure that you thoroughly understand how to write CSS, JSONPath or XPath selector rules before filing an issue on GitHub! <a
href="https://github.com/dgtlmoon/changedetection.io/wiki/CSS-Selector-help">here for more CSS selector help</a>.<br/>
</span>
</div>
</fieldset>
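For the "json:" prefixed rules described above, the linked jsonpath-ng package evaluates the path against the parsed document. A small sketch (the sample data and rule are made up for illustration):

import json
from jsonpath_ng import parse

document = json.loads('{"items": [{"price": 10}, {"price": 12}]}')
rule = "json:$.items[*].price"
matches = parse(rule[len("json:"):]).find(document)
print([m.value for m in matches])  # [10, 12]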
<div class="pure-control-group">
{{ render_field(form.subtractive_selectors, rows=5, placeholder="header
footer
nav
.stockticker") }}
<span class="pure-form-message-inline">
<ul>
<li> Remove HTML element(s) by CSS selector before text conversion. </li>
<li> Add multiple elements or CSS selectors per line to ignore multiple parts of the HTML. </li>
</ul>
</span>
</div>
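Removing elements by CSS selector before text conversion can be sketched with BeautifulSoup; this mirrors the idea behind the element_removal helper exercised in the tests further below, but is not the project's exact implementation.

from bs4 import BeautifulSoup

def strip_selectors(html, selectors):
    soup = BeautifulSoup(html, "html.parser")
    for selector in selectors:        # one selector per line in the form field
        for node in soup.select(selector):
            node.decompose()          # drop the element before text conversion
    return str(soup)

cleaned = strip_selectors("<body><header>x</header><p>keep me</p></body>", ["header", ".stockticker"])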
<fieldset class="pure-group">
{{ render_field(form.ignore_text, rows=5, placeholder="Some text to ignore in a line
/some.regex\d{2}/ for case-INsensitive regex
") }}
<span class="pure-form-message-inline">
Each line processed separately, any line matching will be ignored.<br/>
Regular Expression support, wrap the line in forward slash <b>/regex/</b>.
<ul>
<li>Each line processed separately, any line matching will be ignored (removed before creating the checksum)</li>
<li>Regular Expression support, wrap the entire line in forward slash <code>/regex/</code></li>
<li>Changing this will affect the comparison checksum which may trigger an alert</li>
<li>Use the preview/show current tab to see ignores</li>
</ul>
</span>
</fieldset>
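Since matching lines are removed before the checksum is built, the ignore rules effectively behave like this sketch: plain lines are case-insensitive substring matches, /.../ lines are regexes, and whatever survives is hashed (all names here are illustrative).

import re, hashlib

def checksum_without_ignored(text, ignore_rules):
    kept = []
    for line in text.splitlines():
        ignored = False
        for rule in ignore_rules:
            if rule.startswith("/") and rule.endswith("/"):
                ignored = bool(re.search(rule.strip("/"), line, re.IGNORECASE))
            else:
                ignored = rule.lower() in line.lower()
            if ignored:
                break
        if not ignored:
            kept.append(line)
    # only the non-ignored lines contribute to the comparison checksum
    return hashlib.md5("\n".join(kept).encode("utf-8")).hexdigest()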
</div>
<div class="tab-pane-inner" id="triggers">
<fieldset>
<div class="pure-control-group">
{{ render_field(form.trigger_text, rows=5, placeholder="Some text to wait for in a line
/some.regex\d{2}/ for case-INsensitive regex
") }}</br>
<span class="pure-form-message-inline">Text to wait for before triggering a change/notification, all text and regex are tested <i>case-insensitive</i>.</span><br/>
<span class="pure-form-message-inline">Trigger text is processed from the result-text that comes out of any <a href="#filters">CSS/JSON Filters</a> for this watch</span>.<br/>
<span class="pure-form-message-inline">Each line is process separately (think of each line as "OR")</span><br/>
<span class="pure-form-message-inline">Note: Wrap in forward slash / to use regex example: <span style="font-family: monospace; background: #eee">/foo\d/</span> </span>
") }}
<span class="pure-form-message-inline">
<ul>
<li>Text to wait for before triggering a change/notification, all text and regex are tested <i>case-insensitive</i>.</li>
<li>Trigger text is processed from the result-text that comes out of any CSS/JSON Filters for this watch</li>
<li>Each line is processed separately (think of each line as "OR")</li>
<li>Note: Wrap in forward slash / to use regex example: <code>/foo\d/</code></li>
</ul>
</span>
</div>
</fieldset>
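The "each line as OR" behaviour described above amounts to: fire only when at least one trigger line (plain text or /regex/) is found, case-insensitively, in the filtered result text. A tiny illustrative sketch:

import re

def should_trigger(result_text, trigger_lines):
    for rule in trigger_lines:
        pattern = rule.strip("/") if rule.startswith("/") and rule.endswith("/") else re.escape(rule)
        if re.search(pattern, result_text, re.IGNORECASE):
            return True   # any single matching line is enough ("OR")
    return False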
<fieldset>
<div class="pure-control-group">
{{ render_field(form.text_should_not_be_present, rows=5, placeholder="For example: Out of stock
Sold out
Not in stock
Unavailable") }}
<span class="pure-form-message-inline">
<ul>
<li>Block change-detection while this text is on the page, all text and regex are tested <i>case-insensitive</i>, good for waiting for when a product is available again</li>
<li>Block text is processed from the result-text that comes out of any CSS/JSON Filters for this watch</li>
<li>All lines here must not exist (think of each line as "OR")</li>
<li>Note: Wrap in forward slash / to use regex example: <code>/foo\d/</code></li>
</ul>
</span>
</div>
</fieldset>
<fieldset>
<div class="pure-control-group">
{{ render_field(form.extract_text, rows=5, placeholder="\d+ online") }}
<span class="pure-form-message-inline">
<ul>
<li>Extracts text in the final output (line by line) after other filters using regular expressions;
<ul>
<li>Regular expression &ndash; example <code>/reports.+?2022/i</code></li>
<li>Use <code>(?aiLmsux)</code> type flags (more <a href="https://docs.python.org/3/library/re.html#index-15">information here</a>)<br/></li>
<li>Keyword &ndash; example <code>Out of stock</code></li>
<li>Use groups to extract just that text &ndash; example <code>/reports.+?(\d+)/i</code> returns a list of years only</li>
</ul>
</li>
<li>One line per regular-expression/ string match</li>
</ul>
</span>
</div>
</fieldset>
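The extract rules above run line by line over the final text; with a capture group only the group is kept, and inline flags such as (?i) are ordinary Python re syntax. A hedged sketch of the idea (the sample text and patterns are invented):

import re

text = "annual reports for 2021\nannual reports for 2022\n5 online"
for pattern in [r"/reports.+?(\d+)/i", r"\d+ online"]:
    if pattern.startswith("/"):
        body, flags = pattern[1:].rsplit("/", 1)
        regex = re.compile(body, re.IGNORECASE if "i" in flags else 0)
    else:
        regex = re.compile(pattern)
    for line in text.splitlines():
        match = regex.search(line)
        if match:
            # with a capture group, keep just that group; otherwise the whole match
            print(match.group(1) if match.groups() else match.group(0))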
</div>
<div class="tab-pane-inner visual-selector-ui" id="visualselector">
<img id="beta-logo" src="{{url_for('static_content', group='images', filename='beta-logo.png')}}">
<strong>Pro-tip:</strong> This tool is only for limiting which elements will be included in a change-detection, not for interacting with the browser directly.
<fieldset>
<div class="pure-control-group">
{% if visualselector_enabled %}
{% if visualselector_data_is_ready %}
<div id="selector-header">
<a id="clear-selector" class="pure-button button-secondary button-xsmall" style="font-size: 70%">Clear selection</a>
<i class="fetching-update-notice" style="font-size: 80%;">One moment, fetching screenshot and element information..</i>
</div>
<div id="selector-wrapper">
<!-- request the screenshot and get the element offset info ready -->
<!-- use img src ready load to know everything is ready to map out -->
<!-- @todo: maybe something interesting like a field to select 'elements that contain text... and their parents n' -->
<img id="selector-background" />
<canvas id="selector-canvas"></canvas>
</div>
<div id="selector-current-xpath" style="overflow-x: hidden"><strong>Currently:</strong>&nbsp;<span class="text">Loading...</span></div>
<span class="pure-form-message-inline">
<p><span style="font-weight: bold">Beta!</span> The Visual Selector is new and there may be minor bugs, please report pages that dont work, help us to improve this software!</p>
</span>
{% else %}
<span class="pure-form-message-inline">Screenshot and element data is not available or not yet ready.</span>
{% endif %}
{% else %}
<span class="pure-form-message-inline">
<p>Sorry, this functionality only works with Playwright/Chrome enabled watches.</p>
<p>Enable the Playwright Chrome fetcher, or alternatively try our <a href="https://lemonade.changedetection.io/start">very affordable subscription based service</a>.</p>
<p>This is because Selenium/WebDriver cannot extract full-page screenshots reliably.</p>
</span>
{% endif %}
</div>
</fieldset>
</div>
<div id="actions">
<div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Save</button>
<a href="{{url_for('api_delete', uuid=uuid)}}"
{{ render_button(form.save_button) }} {{ render_button(form.save_and_preview_button) }}
<a href="{{url_for('form_delete', uuid=uuid)}}"
class="pure-button button-small button-error ">Delete</a>
<a href="{{url_for('api_clone', uuid=uuid)}}"
<a href="{{url_for('clear_watch_history', uuid=uuid)}}"
class="pure-button button-small button-error ">Clear History</a>
<a href="{{url_for('form_clone', uuid=uuid)}}"
class="pure-button button-small ">Create Copy</a>
</div>
</div>

View File

@@ -1,23 +1,86 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<div class="inner">
<form class="pure-form pure-form-aligned" action="{{url_for('import_page')}}" method="POST">
<fieldset class="pure-group">
<legend>One URL per line, URLs that do not pass validation will stay in the textarea.</legend>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<div class="edit-form monospaced-textarea">
<textarea name="urls" class="pure-input-1-2" placeholder="https://"
style="width: 100%;
<div class="tabs collapsable">
<ul>
<li class="tab" id="default-tab"><a href="#url-list">URL List</a></li>
<li class="tab"><a href="#distill-io">Distill.io</a></li>
</ul>
</div>
<div class="box-wrap inner">
<form class="pure-form pure-form-aligned" action="{{url_for('import_page')}}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
<div class="tab-pane-inner" id="url-list">
<fieldset class="pure-group">
<legend>
Enter one URL per line, and optionally add tags for each URL after a space, delimited by commas (,):
<br>
<code>https://example.com tag1, tag2, last tag</code>
<br>
URLs which do not pass validation will stay in the textarea.
</legend>
<textarea name="urls" class="pure-input-1-2" placeholder="https://"
style="width: 100%;
font-family:monospace;
white-space: pre;
overflow-wrap: normal;
overflow-x: scroll;" rows="25">{{ remaining }}</textarea>
</fieldset>
overflow-x: scroll;" rows="25">{{ import_url_list_remaining }}</textarea>
</fieldset>
</div>
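The import format described in the legend (URL first, then optional comma-separated tags after the first space) parses roughly like this short sketch:

line = "https://example.com tag1, tag2, last tag"
url, _, tag_part = line.strip().partition(" ")
tags = [t.strip() for t in tag_part.split(",") if t.strip()]
print(url, tags)  # https://example.com ['tag1', 'tag2', 'last tag']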
<div class="tab-pane-inner" id="distill-io">
<fieldset class="pure-group">
<legend>
Copy and paste your Distill.io watch 'export' file; this should be a JSON file.<br/>
This is <i>experimental</i>, supported fields are <code>name</code>, <code>uri</code>, <code>tags</code>, <code>config:selections</code>, the rest (including <code>schedule</code>) are ignored.
<br/>
<p>
How to export? <a href="https://distill.io/docs/web-monitor/how-export-and-import-monitors/">https://distill.io/docs/web-monitor/how-export-and-import-monitors/</a><br/>
Be sure to set your default fetcher to Chrome if required.<br/>
</p>
</legend>
<textarea name="distill-io" class="pure-input-1-2" style="width: 100%;
font-family:monospace;
white-space: pre;
overflow-wrap: normal;
overflow-x: scroll;" placeholder="Example Distill.io JSON export file
{
&quot;client&quot;: {
&quot;local&quot;: 1
},
&quot;data&quot;: [
{
&quot;name&quot;: &quot;Unraid | News&quot;,
&quot;uri&quot;: &quot;https://unraid.net/blog&quot;,
&quot;config&quot;: &quot;{\&quot;selections\&quot;:[{\&quot;frames\&quot;:[{\&quot;index\&quot;:0,\&quot;excludes\&quot;:[],\&quot;includes\&quot;:[{\&quot;type\&quot;:\&quot;xpath\&quot;,\&quot;expr\&quot;:\&quot;(//div[@id='App']/div[contains(@class,'flex')]/main[contains(@class,'relative')]/section[contains(@class,'relative')]/div[@class='container']/div[contains(@class,'flex')]/div[contains(@class,'w-full')])[1]\&quot;}]}],\&quot;dynamic\&quot;:true,\&quot;delay\&quot;:2}],\&quot;ignoreEmptyText\&quot;:true,\&quot;includeStyle\&quot;:false,\&quot;dataAttr\&quot;:\&quot;text\&quot;}&quot;,
&quot;tags&quot;: [],
&quot;content_type&quot;: 2,
&quot;state&quot;: 40,
&quot;schedule&quot;: &quot;{\&quot;type\&quot;:\&quot;INTERVAL\&quot;,\&quot;params\&quot;:{\&quot;interval\&quot;:4447}}&quot;,
&quot;ts&quot;: &quot;2022-03-27T15:51:15.667Z&quot;
}
]
}
" rows="25">{{ original_distill_json }}</textarea>
</fieldset>
</div>
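Per the legend above, only name, uri, tags and config:selections are read from a Distill export, while schedule and the rest are ignored. A minimal sketch of pulling those fields out of the pasted JSON; note that in the example export the "config" field is itself a JSON-encoded string.

import json

def parse_distill_export(raw):
    export = json.loads(raw)
    watches = []
    for entry in export.get("data", []):
        config = json.loads(entry.get("config", "{}"))  # config is a nested JSON string
        watches.append({
            "title": entry.get("name"),
            "url": entry.get("uri"),
            "tags": entry.get("tags", []),
            "selections": config.get("selections", []),
        })
    return watches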
<button type="submit" class="pure-button pure-input-1-2 pure-button-primary">Import</button>
</form>
</div>
</div>
</div>
{% endblock %}

View File

@@ -1,15 +1,15 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<div class="login-form">
<div class="inner">
<form class="pure-form pure-form-stacked" action="{{url_for('login')}}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
<fieldset>
<div class="pure-control-group">
<label for="password">Password</label>
<input type="password" id="password" required="" name="password" value=""
size="15"/>
size="15" autofocus />
<input type="hidden" id="email" name="email" value="defaultuser@changedetection.io" />
</div>
<div class="pure-control-group">

View File

@@ -0,0 +1,19 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<div class="inner">
<h4 style="margin-top: 0px;">Notification debug log</h4>
<div id="notification-error-log">
<ul style="font-size: 80%; margin:0px; padding: 0 0 0 7px">
{% for log in logs|reverse %}
<li>{{log}}</li>
{% endfor %}
</ul>
</div>
</div>
</div>
{% endblock %}

View File

@@ -1,26 +1,52 @@
{% extends 'base.html' %}
{% block content %}
<script>
const screenshot_url="{{url_for('static_content', group='screenshot', filename=uuid)}}";
</script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='diff-overview.js')}}" defer></script>
<div id="settings">
<h1>Current</h1>
<h1>Current - {{watch.last_checked|format_timestamp_timeago}}</h1>
</div>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<div class="tabs">
<ul>
<li class="tab" id="default-tab"><a href="#text">Text</a></li>
<li class="tab" id="screenshot-tab"><a href="#screenshot">Screenshot</a></li>
</ul>
</div>
<div id="diff-ui">
<table>
<tbody>
<tr>
<!-- just proof of concept copied straight from github.com/kpdecker/jsdiff -->
<td id="diff-col">
<span id="result">{% for row in content %}<pre>{{row}}</pre>{% endfor %}</span>
</td>
</tr>
</tbody>
</table>
<div class="tab-pane-inner" id="text">
<span class="ignored">Grey lines are ignored</span> <span class="triggered">Blue lines are triggers</span>
<table>
<tbody>
<tr>
<td id="diff-col">
{% for row in content %}
<div class="{{row.classes}}">{{row.line}}</div>
{% endfor %}
</td>
</tr>
</tbody>
</table>
</div>
<div class="tab-pane-inner" id="screenshot">
<div class="tip">
For now, differences are performed on text, not graphically; only the latest screenshot is available.
</div>
<br/>
{% if is_html_webdriver %}
{% if screenshot %}
<img style="max-width: 80%" id="screenshot-img" alt="Current screenshot from most recent request"/>
{% else %}
No screenshot available just yet! Try rechecking the page.
{% endif %}
{% else %}
<strong>Screenshot requires Playwright/WebDriver enabled</strong>
{% endif %}
</div>
</div>
{% endblock %}

View File

@@ -1,65 +1,187 @@
{% extends 'base.html' %}
{% block content %}
{% from '_helpers.jinja' import render_field %}
{% from '_common_fields.jinja' import render_notifications_field %}
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='settings.js')}}" defer></script>
{% from '_helpers.jinja' import render_field, render_checkbox_field, render_button %}
{% from '_common_fields.jinja' import render_common_settings_form %}
<script>
const notification_base_url="{{url_for('ajax_callback_send_notification_test')}}";
{% if emailprefix %}
const email_notification_prefix=JSON.parse('{{emailprefix|tojson}}');
{% endif %}
</script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='global-settings.js')}}" defer></script>
<div class="edit-form">
<div class="tabs">
<div class="tabs collapsable">
<ul>
<li class="tab" id="default-tab"><a href="#general">General</a></li>
<li class="tab"><a href="#notifications">Notifications</a></li>
<li class="tab"><a href="#fetching">Fetching</a></li>
<li class="tab"><a href="#filters">Global Filters</a></li>
<li class="tab"><a href="#api">API</a></li>
</ul>
</div>
<div class="box-wrap inner">
<form class="pure-form pure-form-stacked settings" action="{{url_for('settings_page')}}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
<div class="tab-pane-inner" id="general">
<fieldset>
<div class="pure-control-group">
{{ render_field(form.minutes_between_check) }}
{{ render_field(form.requests.form.time_between_check, class="time-check-widget") }}
<span class="pure-form-message-inline">Default time for all watches, when the watch does not have a specific time setting.</span>
</div>
<div class="pure-control-group">
{% if current_user.is_authenticated %}
<a href="{{url_for('settings_page', removepassword='yes')}}"
class="pure-button pure-button-primary">Remove password</a>
{% else %}
{{ render_field(form.password) }}
<span class="pure-form-message-inline">Password protection for your changedetection.io application.</span>
{% endif %}
{{ render_field(form.requests.form.jitter_seconds, class="jitter_seconds") }}
<span class="pure-form-message-inline">Example - 3 seconds random jitter could trigger up to 3 seconds earlier or up to 3 seconds later</span>
</div>
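The jitter setting simply shifts each scheduled check by a random offset within ± the configured seconds; a sketch of the idea, not the scheduler code:

import random

time_between_check = 180 * 60          # 180 minutes, in seconds
jitter_seconds = 3
next_check_in = time_between_check + random.uniform(-jitter_seconds, jitter_seconds)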
<div class="pure-control-group">
{{ render_field(form.extract_title_as_title) }}
{{ render_field(form.application.form.filter_failure_notification_threshold_attempts, class="filter_failure_notification_threshold_attempts") }}
<span class="pure-form-message-inline">After this many consecutive times that the CSS/xPath filter is missing, send a notification
<br/>
Set to <strong>0</strong> to disable
</span>
</div>
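The threshold behaves like a consecutive-miss counter: after that many checks in a row where the CSS/xPath filter finds nothing, a notification is sent, and 0 disables it entirely. A minimal illustrative check:

def filter_failure_should_notify(consecutive_misses, threshold):
    # threshold == 0 disables the notification entirely
    return threshold > 0 and consecutive_misses >= threshold

print(filter_failure_should_notify(consecutive_misses=6, threshold=6))  # True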
<div class="pure-control-group">
{% if not hide_remove_pass %}
{% if current_user.is_authenticated %}
{{ render_button(form.application.form.removepassword_button) }}
{% else %}
{{ render_field(form.application.form.password) }}
<span class="pure-form-message-inline">Password protection for your changedetection.io application.</span>
{% endif %}
{% else %}
<span class="pure-form-message-inline">Password is locked.</span>
{% endif %}
</div>
<div class="pure-control-group">
{{ render_field(form.application.form.base_url, placeholder="http://yoursite.com:5000/",
class="m-d") }}
<span class="pure-form-message-inline">
Base URL used for the {base_url} token in notifications and RSS links.<br/>Default value is the ENV var 'BASE_URL' (Currently "{{current_base_url}}"),
<a href="https://github.com/dgtlmoon/changedetection.io/wiki/Configurable-BASE_URL-setting">read more here</a>.
</span>
</div>
<div class="pure-control-group">
{{ render_checkbox_field(form.application.form.extract_title_as_title) }}
<span class="pure-form-message-inline">Note: This will automatically apply to all existing watches.</span>
</div>
<div class="pure-control-group">
{{ render_checkbox_field(form.application.form.real_browser_save_screenshot) }}
<span class="pure-form-message-inline">When using a Chrome browser, a screenshot from the last check will be available on the Diff page</span>
</div>
<div class="pure-control-group">
{{ render_checkbox_field(form.application.form.empty_pages_are_a_change) }}
<span class="pure-form-message-inline">When a page contains HTML, but no renderable text appears (empty page), is this considered a change?</span>
</div>
{% if form.requests.proxy %}
<div class="pure-control-group inline-radio">
{{ render_field(form.requests.form.proxy, class="fetch-backend-proxy") }}
<span class="pure-form-message-inline">
Choose a default proxy for all watches
</span>
</div>
{% endif %}
</fieldset>
</div>
<div class="tab-pane-inner" id="notifications">
{{ render_notifications_field(form) }}
<fieldset>
<div class="field-group">
{{ render_common_settings_form(form.application.form, current_base_url, emailprefix) }}
</div>
</fieldset>
</div>
<div class="tab-pane-inner" id="fetching">
<div class="pure-control-group">
{{ render_field(form.fetch_backend) }}
<div class="pure-control-group inline-radio">
{{ render_field(form.application.form.fetch_backend, class="fetch-backend") }}
<span class="pure-form-message-inline">
<p>Use the <strong>Basic</strong> method (default) where your watched sites don't need Javascript to render.</p>
<p>The <strong>Chrome/Javascript</strong> method requires a network connection to a running WebDriver+Chrome server. </p>
<p>The <strong>Chrome/Javascript</strong> method requires a network connection to a running WebDriver+Chrome server, set by the ENV var 'WEBDRIVER_URL'. </p>
</span>
</div>
<fieldset class="pure-group" id="webdriver-override-options">
<div class="pure-form-message-inline">
<strong>If you're having trouble waiting for the page to be fully rendered (text missing etc), try increasing the 'wait' time here.</strong>
<br/>
This will wait <i>n</i> seconds before extracting the text.
</div>
<div class="pure-control-group">
{{ render_field(form.application.form.webdriver_delay) }}
</div>
</fieldset>
</div>
<div class="tab-pane-inner" id="filters">
<fieldset class="pure-group">
{{ render_checkbox_field(form.application.form.ignore_whitespace) }}
<span class="pure-form-message-inline">Ignore whitespace, tabs and new-lines/line-feeds when considering if a change was detected.<br/>
<i>Note:</i> Changing this will change the status of your existing watches, possibly trigger alerts etc.
</span>
</fieldset>
<fieldset class="pure-group">
{{ render_checkbox_field(form.application.form.render_anchor_tag_content) }}
<span class="pure-form-message-inline">Render anchor tag content, default disabled, when enabled renders links as <code>(link text)[https://somesite.com]</code>
<br/>
<i>Note:</i> Changing this could affect the content of your existing watches, possibly trigger alerts etc.
</span>
</fieldset>
<fieldset class="pure-group">
{{ render_field(form.application.form.global_subtractive_selectors, rows=5, placeholder="header
footer
nav
.stockticker") }}
<span class="pure-form-message-inline">
<ul>
<li> Remove HTML element(s) by CSS selector before text conversion. </li>
<li> Add multiple elements or CSS selectors per line to ignore multiple parts of the HTML. </li>
</ul>
</span>
</fieldset>
<fieldset class="pure-group">
{{ render_field(form.application.form.global_ignore_text, rows=5, placeholder="Some text to ignore in a line
/some.regex\d{2}/ for case-INsensitive regex
") }}
<span class="pure-form-message-inline">Note: This is applied globally in addition to the per-watch rules.</span><br/>
<span class="pure-form-message-inline">
<ul>
<li>Note: This is applied globally in addition to the per-watch rules.</li>
<li>Each line processed separately, any line matching will be ignored (removed before creating the checksum)</li>
<li>Regular Expression support, wrap the entire line in forward slash <code>/regex/</code></li>
<li>Changing this will affect the comparison checksum which may trigger an alert</li>
<li>Use the preview/show current tab to see ignores</li>
</ul>
</span>
</fieldset>
</div>
<div class="tab-pane-inner" id="api">
<p>Drive your changedetection.io via the API, more about <a href="https://github.com/dgtlmoon/changedetection.io/wiki/API-Reference">API access here</a></p>
<div class="pure-control-group">
{{ render_checkbox_field(form.application.form.api_access_token_enabled) }}
<div class="pure-form-message-inline">Restrict API access limit by using <code>x-api-key</code> header</div><br/>
<div class="pure-form-message-inline"><br/>API Key <span id="api-key">{{api_key}}</span>
<span style="display:none;" id="api-key-copy" >copy</span>
</div>
</div>
</div>
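Driving the API then just means sending the key in the x-api-key header. A sketch with requests; the /api/v1/watch path is an assumption taken from the linked API reference, and the host and key are placeholders.

import requests

BASE = "http://localhost:5000"                       # placeholder host
API_KEY = "your-api-key-from-the-settings-page"      # placeholder key

# List all watches (returns JSON keyed by watch UUID); path assumed from the API reference
watches = requests.get(f"{BASE}/api/v1/watch", headers={"x-api-key": API_KEY}).json()

# Create a new watch
requests.post(f"{BASE}/api/v1/watch",
              json={"url": "https://example.com", "tag": "One, Two"},
              headers={"x-api-key": API_KEY})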
<div id="actions">
<div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Save</button>
<a href="{{url_for('index')}}" class="pure-button button-small button-cancel">Back</a>
<a href="{{url_for('scrub_page')}}" class="pure-button button-small button-cancel">Delete
History
Snapshot Data</a>
{{ render_button(form.save_button) }}
<a href="{{url_for('index')}}" class="pure-button button-small button-cancel">Back</a>
<a href="{{url_for('clear_all_history')}}" class="pure-button button-small button-cancel">Clear Snapshot History</a>
</div>
</div>
</form>
</div>

View File

@@ -1,18 +1,27 @@
{% extends 'base.html' %}
{% block content %}
{% from '_helpers.jinja' import render_simple_field %}
{% from '_helpers.jinja' import render_simple_field, render_field %}
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script>
<script type="text/javascript" src="{{url_for('static_content', group='js', filename='watch-overview.js')}}" defer></script>
<div class="box">
<form class="pure-form" action="{{ url_for('api_watch_add') }}" method="POST" id="new-watch-form">
<form class="pure-form" action="{{ url_for('form_quick_watch_add') }}" method="POST" id="new-watch-form">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}"/>
<fieldset>
<legend>Add a new change detection watch</legend>
{{ render_simple_field(form.url, placeholder="https://...", required=true) }}
{{ render_simple_field(form.tag, value=active_tag if active_tag else '', placeholder="tag") }}
<button type="submit" class="pure-button pure-button-primary">Watch</button>
<div id="watch-add-wrapper-zone">
<div>
{{ render_simple_field(form.url, placeholder="https://...", required=true) }}
{{ render_simple_field(form.tag, value=active_tag if active_tag else '', placeholder="watch group") }}
</div>
<div>
{{ render_simple_field(form.watch_submit_button, title="Watch this URL!" ) }}
{{ render_simple_field(form.edit_and_watch_submit_button, title="Edit first then Watch") }}
</div>
</div>
</fieldset>
<!-- add extra stuff, like do a http POST and send headers -->
<!-- user/pass r = requests.get('https://api.github.com/user', auth=('user', 'pass')) -->
<span style="color:#eee; font-size: 80%;"><img style="height: 1em;display:inline-block;" src="{{url_for('static_content', group='images', filename='spread-white.svg')}}" /> Tip: You can also add 'shared' watches. <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Sharing-a-Watch">More info</a></a></span>
</form>
<div>
<a href="{{url_for('index')}}" class="pure-button button-tag {{'active' if not active_tag }}">All</a>
@@ -38,41 +47,48 @@
<tbody>
{% for watch in watches %}
{% for watch in watches|sort(attribute='last_changed', reverse=True) %}
<tr id="{{ watch.uuid }}"
class="{{ loop.cycle('pure-table-odd', 'pure-table-even') }}
{% if watch.last_error is defined and watch.last_error != False %}error{% endif %}
{% if watch.last_notification_error is defined and watch.last_notification_error != False %}error{% endif %}
{% if watch.paused is defined and watch.paused != False %}paused{% endif %}
{% if watch.newest_history_key| int > watch.last_viewed| int %}unviewed{% endif %}">
{% if watch.newest_history_key| int > watch.last_viewed and watch.history_n>=2 %}unviewed{% endif %}
{% if watch.uuid in queued_uuids %}queued{% endif %}">
<td class="inline">{{ loop.index }}</td>
<td class="inline paused-state state-{{watch.paused}}"><a href="{{url_for('index', pause=watch.uuid, tag=active_tag)}}"><img src="{{url_for('static_content', group='images', filename='pause.svg')}}" alt="Pause" title="Pause"/></a></td>
<td class="title-col inline">{{watch.title if watch.title is not none and watch.title|length > 0 else watch.url}}
<a class="external" target="_blank" rel="noopener" href="{{ watch.url }}"></a>
{%if watch.fetch_backend == "html_webdriver" %}<img style="height: 1em; display:inline-block;" src="/static/images/Google-Chrome-icon.png" />{% endif %}
<a class="external" target="_blank" rel="noopener" href="{{ watch.url.replace('source:','') }}"></a>
<a href="{{url_for('form_share_put_watch', uuid=watch.uuid)}}"><img style="height: 1em;display:inline-block;" src="{{url_for('static_content', group='images', filename='spread.svg')}}" /></a>
{%if watch.fetch_backend == "html_webdriver" %}<img style="height: 1em; display:inline-block;" src="{{url_for('static_content', group='images', filename='Google-Chrome-icon.png')}}" />{% endif %}
{% if watch.last_error is defined and watch.last_error != False %}
<div class="fetch-error">{{ watch.last_error }}</div>
{% endif %}
{% if watch.last_notification_error is defined and watch.last_notification_error != False %}
<div class="fetch-error notification-error">{{ watch.last_notification_error }}</div>
{% endif %}
{% if not active_tag %}
<span class="watch-tag-list">{{ watch.tag}}</span>
{% endif %}
</td>
<td class="last-checked">{{watch|format_last_checked_time}}</td>
<td class="last-changed">{% if watch.history|length >= 2 and watch.last_changed %}
<td class="last-checked">{{watch|format_last_checked_time|safe}}</td>
<td class="last-changed">{% if watch.history_n >=2 and watch.last_changed >0 %}
{{watch.last_changed|format_timestamp_timeago}}
{% else %}
Not yet
{% endif %}
</td>
<td>
<a href="{{ url_for('api_watch_checknow', uuid=watch.uuid, tag=request.args.get('tag')) }}"
class="pure-button button-small pure-button-primary">Recheck</a>
<a {% if watch.uuid in queued_uuids %}disabled="true"{% endif %} href="{{ url_for('form_watch_checknow', uuid=watch.uuid, tag=request.args.get('tag')) }}"
class="recheck pure-button button-small pure-button-primary">{% if watch.uuid in queued_uuids %}Queued{% else %}Recheck{% endif %}</a>
<a href="{{ url_for('edit_page', uuid=watch.uuid)}}" class="pure-button button-small pure-button-primary">Edit</a>
{% if watch.history|length >= 2 %}
<a href="{{ url_for('diff_history_page', uuid=watch.uuid) }}" target="{{watch.uuid}}" class="pure-button button-small pure-button-primary">Diff</a>
{% if watch.history_n >= 2 %}
<a href="{{ url_for('diff_history_page', uuid=watch.uuid) }}" target="{{watch.uuid}}" class="pure-button button-small pure-button-primary diff-link">Diff</a>
{% else %}
{% if watch.history|length == 1 %}
{% if watch.history_n == 1 %}
<a href="{{ url_for('preview_page', uuid=watch.uuid)}}" target="{{watch.uuid}}" class="pure-button button-small pure-button-primary">Preview</a>
{% endif %}
{% endif %}
@@ -88,13 +104,13 @@
</li>
{% endif %}
<li>
<a href="{{ url_for('api_watch_checknow', tag=active_tag) }}" class="pure-button button-tag ">Recheck
<a href="{{ url_for('form_watch_checknow', tag=active_tag) }}" class="pure-button button-tag ">Recheck
all {% if active_tag%}in "{{active_tag}}"{%endif%}</a>
</li>
<li>
<a href="{{ url_for('index', tag=active_tag , rss=true)}}"><img id="feed-icon" src="{{url_for('static_content', group='images', filename='Generic_Feed-icon.svg')}}" height="15px"></a>
<a href="{{ url_for('rss', tag=active_tag , token=app_rss_token)}}"><img alt="RSS Feed" id="feed-icon" src="{{url_for('static_content', group='images', filename='Generic_Feed-icon.svg')}}" height="15"></a>
</li>
</ul>
</div>
</div>
{% endblock %}
{% endblock %}

View File

@@ -16,13 +16,14 @@ def cleanup(datastore_path):
# Unlink test output files
files = ['output.txt',
'url-watches.json',
'secret.txt',
'notification.txt',
'count.txt',
'endpoint-content.txt']
'endpoint-content.txt'
]
for file in files:
try:
os.unlink("{}/{}".format(datastore_path, file))
x = 1
except FileNotFoundError:
pass
@@ -31,20 +32,22 @@ def app(request):
"""Create application for the tests."""
datastore_path = "./test-datastore"
# So they don't delay in fetching
os.environ["MINIMUM_SECONDS_RECHECK_TIME"] = "0"
try:
os.mkdir(datastore_path)
except FileExistsError:
pass
# Enable a BASE_URL for notifications to work (so we can look for diff/ etc URLs)
os.environ["BASE_URL"] = "http://mysite.com/"
cleanup(datastore_path)
app_config = {'datastore_path': datastore_path}
cleanup(app_config['datastore_path'])
datastore = store.ChangeDetectionStore(datastore_path=app_config['datastore_path'], include_default_watches=False)
app = changedetection_app(app_config, datastore)
# Disable CSRF while running tests
app.config['WTF_CSRF_ENABLED'] = False
app.config['STOP_THREADS'] = True
def teardown():

View File

@@ -0,0 +1,2 @@
"""Tests for the app."""

View File

@@ -0,0 +1,3 @@
#!/usr/bin/python3
from .. import conftest

View File

@@ -0,0 +1,48 @@
#!/usr/bin/python3
import time
from flask import url_for
from ..util import live_server_setup
import logging
def test_fetch_webdriver_content(client, live_server):
live_server_setup(live_server)
#####################
res = client.post(
url_for("settings_page"),
data={"application-empty_pages_are_a_change": "",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_webdriver"},
follow_redirects=True
)
assert b"Settings updated." in res.data
# Add our URL to the import page
res = client.post(
url_for("import_page"),
data={"urls": "https://changedetection.io/ci-test.html"},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(3)
attempt = 0
while attempt < 20:
res = client.get(url_for("index"))
if not b'Checking now' in res.data:
break
logging.getLogger().info("Waiting for check to not say 'Checking now'..")
time.sleep(3)
attempt += 1
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
logging.getLogger().info("Looking for correct fetched HTML (text) from server")
assert b'cool it works' in res.data

View File

@@ -1,20 +1,20 @@
from flask import url_for
from . util import live_server_setup
def test_check_access_control(app, client):
# Still doesn't work, but this is closer.
with app.test_client() as c:
# Check we don't have any password protection enabled yet.
with app.test_client(use_cookies=True) as c:
# Check we don't have any password protection enabled yet.
res = c.get(url_for("settings_page"))
assert b"Remove password" not in res.data
# Enable password check.
res = c.post(
url_for("settings_page"),
data={"password": "foobar",
"minutes_between_check": 180,
'fetch_backend': "html_requests"},
data={"application-password": "foobar",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"},
follow_redirects=True
)
@@ -46,63 +46,34 @@ def test_check_access_control(app, client):
assert b"BACKUP" in res.data
assert b"IMPORT" in res.data
assert b"LOG OUT" in res.data
assert b"time_between_check-minutes" in res.data
assert b"fetch_backend" in res.data
# Now remove the password so other tests function, @todo this should happen before each test automatically
res = c.get(url_for("settings_page", removepassword="yes"),
follow_redirects=True)
##################################################
# Remove password button, and check that it worked
##################################################
res = c.post(
url_for("settings_page"),
data={
"requests-time_between_check-minutes": 180,
"application-fetch_backend": "html_webdriver",
"application-removepassword_button": "Remove password"
},
follow_redirects=True,
)
assert b"Password protection removed." in res.data
res = c.get(url_for("index"))
assert b"LOG OUT" not in res.data
# There was a bug where saving the settings form would submit a blank password
def test_check_access_control_no_blank_password(app, client):
# Still doesn't work, but this is closer.
with app.test_client() as c:
# Check we don't have any password protection enabled yet.
res = c.get(url_for("settings_page"))
assert b"Remove password" not in res.data
# Enable password check.
############################################################
# Be sure a blank password doesn't set up password protection
############################################################
res = c.post(
url_for("settings_page"),
data={"password": "",
"minutes_between_check": 180,
'fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Password protection enabled." not in res.data
assert b"Login" not in res.data
# There was a bug where saving the settings form would submit a blank password
def test_check_access_no_remote_access_to_remove_password(app, client):
# Still doesn't work, but this is closer.
with app.test_client() as c:
# Check we don't have any password protection enabled yet.
res = c.get(url_for("settings_page"))
assert b"Remove password" not in res.data
# Enable password check.
res = c.post(
url_for("settings_page"),
data={"password": "password", "minutes_between_check": 180,
'fetch_backend': "html_requests"},
data={"application-password": "",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Password protection enabled." in res.data
assert b"Login" in res.data
assert b"Password protection enabled" not in res.data
res = c.get(url_for("settings_page", removepassword="yes"),
follow_redirects=True)
assert b"Password protection removed." not in res.data
res = c.get(url_for("index"),
follow_redirects=True)
assert b"watch-table-wrapper" not in res.data

View File

@@ -0,0 +1,195 @@
#!/usr/bin/python3
import time
from flask import url_for
from .util import live_server_setup, extract_api_key_from_UI
import json
import uuid
def set_original_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
<div id="sametext">Some text thats the same</div>
<div id="changetext">Some text that will change</div>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def set_modified_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>which has this one new line</p>
</br>
So let's see what happens. </br>
<div id="sametext">Some text thats the same</div>
<div id="changetext">Some text that changes</div>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def is_valid_uuid(val):
try:
uuid.UUID(str(val))
return True
except ValueError:
return False
def test_api_simple(client, live_server):
live_server_setup(live_server)
api_key = extract_api_key_from_UI(client)
# Create a watch
set_original_response()
watch_uuid = None
# Validate bad URL
test_url = url_for('test_endpoint', _external=True,
headers={'x-api-key': api_key}, )
res = client.post(
url_for("createwatch"),
data=json.dumps({"url": "h://xxxxxxxxxom"}),
headers={'content-type': 'application/json', 'x-api-key': api_key},
follow_redirects=True
)
assert res.status_code == 400
# Create new
res = client.post(
url_for("createwatch"),
data=json.dumps({"url": test_url, 'tag': "One, Two", "title": "My test URL"}),
headers={'content-type': 'application/json', 'x-api-key': api_key},
follow_redirects=True
)
s = json.loads(res.data)
assert is_valid_uuid(s['uuid'])
watch_uuid = s['uuid']
assert res.status_code == 201
time.sleep(3)
# Verify it's in the list and that recheck worked
res = client.get(
url_for("createwatch"),
headers={'x-api-key': api_key}
)
assert watch_uuid in json.loads(res.data).keys()
before_recheck_info = json.loads(res.data)[watch_uuid]
assert before_recheck_info['last_checked'] != 0
#705 `last_changed` should be zero on the first check
assert before_recheck_info['last_changed'] == 0
assert before_recheck_info['title'] == 'My test URL'
set_modified_response()
# Trigger recheck of all ?recheck_all=1
client.get(
url_for("createwatch", recheck_all='1'),
headers={'x-api-key': api_key},
)
time.sleep(3)
# Did the recheck fire?
res = client.get(
url_for("createwatch"),
headers={'x-api-key': api_key},
)
after_recheck_info = json.loads(res.data)[watch_uuid]
assert after_recheck_info['last_checked'] != before_recheck_info['last_checked']
assert after_recheck_info['last_changed'] != 0
# Check history index list
res = client.get(
url_for("watchhistory", uuid=watch_uuid),
headers={'x-api-key': api_key},
)
history = json.loads(res.data)
assert len(history) == 2, "Should have two history entries (the original and the changed)"
# Fetch a snapshot by timestamp, check the right one was found
res = client.get(
url_for("watchsinglehistory", uuid=watch_uuid, timestamp=list(history.keys())[-1]),
headers={'x-api-key': api_key},
)
assert b'which has this one new line' in res.data
# Fetch a snapshot by 'latest', check the right one was found
res = client.get(
url_for("watchsinglehistory", uuid=watch_uuid, timestamp='latest'),
headers={'x-api-key': api_key},
)
assert b'which has this one new line' in res.data
# Fetch the whole watch
res = client.get(
url_for("watch", uuid=watch_uuid),
headers={'x-api-key': api_key}
)
watch = json.loads(res.data)
# @todo how to handle None/default global values?
assert watch['history_n'] == 2, "Found replacement history section, which is in its own API"
# Finally delete the watch
res = client.delete(
url_for("watch", uuid=watch_uuid),
headers={'x-api-key': api_key},
)
assert res.status_code == 204
# Check via a relist
res = client.get(
url_for("createwatch"),
headers={'x-api-key': api_key}
)
watch_list = json.loads(res.data)
assert len(watch_list) == 0, "Watch list should be empty"
def test_access_denied(client, live_server):
# `config_api_token_enabled` Should be On by default
res = client.get(
url_for("createwatch")
)
assert res.status_code == 403
res = client.get(
url_for("createwatch"),
headers={'x-api-key': "something horrible"}
)
assert res.status_code == 403
# Disable config_api_token_enabled and it should work
res = client.post(
url_for("settings_page"),
data={
"requests-time_between_check-minutes": 180,
"application-fetch_backend": "html_requests",
"application-api_access_token_enabled": ""
},
follow_redirects=True
)
assert b"Settings updated." in res.data
res = client.get(
url_for("createwatch")
)
assert res.status_code == 200

View File

@@ -0,0 +1,39 @@
#!/usr/bin/python3
import time
from flask import url_for
from . util import live_server_setup
def test_basic_auth(client, live_server):
live_server_setup(live_server)
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_basicauth_method', _external=True).replace("//","//myuser:mypass@")
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
# Check form validation
res = client.post(
url_for("edit_page", uuid="first"),
data={"css_filter": "", "url": test_url, "tag": "", "headers": "", 'fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Updated watch." in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
time.sleep(1)
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
assert b'myuser mypass basic' in res.data

View File

@@ -3,11 +3,19 @@
import time
from flask import url_for
from urllib.request import urlopen
from . util import set_original_response, set_modified_response, live_server_setup
from .util import set_original_response, set_modified_response, live_server_setup
sleep_time_for_fetch_thread = 3
# Basic test to check inscriptis is not adding return line chars, basically works etc
def test_inscriptus():
from inscriptis import get_text
html_content = "<html><body>test!<br/>ok man</body></html>"
stripped_text_from_html = get_text(html_content)
assert stripped_text_from_html == 'test!\nok man'
def test_check_basic_change_detection_functionality(client, live_server):
set_original_response()
live_server_setup(live_server)
@@ -18,13 +26,14 @@ def test_check_basic_change_detection_functionality(client, live_server):
data={"urls": url_for('test_endpoint', _external=True)},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(sleep_time_for_fetch_thread)
# Do this a few times.. ensures we don't accidentally set the status
for n in range(3):
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
@@ -42,6 +51,14 @@ def test_check_basic_change_detection_functionality(client, live_server):
#####################
# Check HTML conversion was detected and worked
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
# Check this class does not appear (that we didn't see the actual source)
assert b'foobar-detection' not in res.data
# Make a change
set_modified_response()
@@ -49,8 +66,8 @@ def test_check_basic_change_detection_functionality(client, live_server):
assert b'which has this one new line' in res.read()
# Force recheck
res = client.get(url_for("api_watch_checknow"), follow_redirects=True)
assert b'1 watches are rechecking.' in res.data
res = client.get(url_for("form_watch_checknow"), follow_redirects=True)
assert b'1 watches are queued for rechecking.' in res.data
time.sleep(sleep_time_for_fetch_thread)
@@ -59,9 +76,14 @@ def test_check_basic_change_detection_functionality(client, live_server):
assert b'unviewed' in res.data
# #75, and it should be in the RSS feed
res = client.get(url_for("index", rss="true"))
res = client.get(url_for("rss"))
expected_url = url_for('test_endpoint', _external=True)
assert b'<rss' in res.data
# re #16 should have the diff in here too
assert b'(into ) which has this one new line' in res.data
assert b'CDATA' in res.data
assert expected_url.encode('utf-8') in res.data
# Following the 'diff' link, it should no longer display as 'unviewed' even after we recheck it a few times
@@ -72,7 +94,7 @@ def test_check_basic_change_detection_functionality(client, live_server):
# Do this a few times.. ensures we don't accidentally set the status
for n in range(2):
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
@@ -80,7 +102,8 @@ def test_check_basic_change_detection_functionality(client, live_server):
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
assert b'head title' not in res.data # Should not be present because this is off by default
assert b'Mark all viewed' not in res.data
assert b'head title' not in res.data # Should not be present because this is off by default
assert b'test-endpoint' in res.data
set_original_response()
@@ -88,20 +111,28 @@ def test_check_basic_change_detection_functionality(client, live_server):
# Enable auto pickup of <title> in settings
res = client.post(
url_for("settings_page"),
data={"extract_title_as_title": "1", "minutes_between_check": 180, 'fetch_backend': "html_requests"},
data={"application-extract_title_as_title": "1", "requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"},
follow_redirects=True
)
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
time.sleep(sleep_time_for_fetch_thread)
res = client.get(url_for("index"))
assert b'unviewed' in res.data
assert b'Mark all viewed' in res.data
# It should have picked up the <title>
assert b'head title' in res.data
# hit the mark all viewed link
res = client.get(url_for("mark_all_viewed"), follow_redirects=True)
assert b'Mark all viewed' not in res.data
assert b'unviewed' not in res.data
#
# Cleanup everything
res = client.get(url_for("api_delete", uuid="all"), follow_redirects=True)
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data

View File

@@ -0,0 +1,25 @@
#!/usr/bin/python3
import time
from flask import url_for
from urllib.request import urlopen
from . util import set_original_response, set_modified_response, live_server_setup
def test_backup(client, live_server):
live_server_setup(live_server)
# Give the endpoint time to spin up
time.sleep(1)
res = client.get(
url_for("get_backup"),
follow_redirects=True
)
# Should get the right zip content type
assert res.content_type == "application/zip"
# Should be PK/ZIP stream
assert res.data.count(b'PK') >= 2

View File

@@ -0,0 +1,137 @@
#!/usr/bin/python3
import time
from flask import url_for
from . util import live_server_setup
from changedetectionio import html_tools
def set_original_ignore_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
def set_modified_original_ignore_response():
test_return_data = """<html>
<body>
Some NEW nice initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
<p>new ignore stuff</p>
<p>out of stock</p>
<p>blah</p>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
# Same page as before, but the block text ('out of stock') is no longer present
def set_modified_response_minus_block_text():
test_return_data = """<html>
<body>
Some NEW nice initial text</br>
<p>Which is across multiple lines</p>
<p>now on sale $2/p>
</br>
So let's see what happens. </br>
<p>new ignore stuff</p>
<p>blah</p>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
def test_check_block_changedetection_text_NOT_present(client, live_server):
sleep_time_for_fetch_thread = 3
live_server_setup(live_server)
# Use a mix of case ('out of stoCk') to prove the match works case-insensitively.
ignore_text = "out of stoCk\r\nfoobar"
set_original_ignore_response()
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Goto the edit page, add our ignore text
# Add our URL to the import page
res = client.post(
url_for("edit_page", uuid="first"),
data={"text_should_not_be_present": ignore_text, "url": test_url, 'fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Updated watch." in res.data
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Check it saved
res = client.get(
url_for("edit_page", uuid="first"),
)
assert bytes(ignore_text.encode('utf-8')) in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
assert b'/test-endpoint' in res.data
# The page changed, BUT the text is still there, just the rest of it changes, we should not see a change
set_modified_original_ignore_response()
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
assert b'/test-endpoint' in res.data
# Now we set a change where the text is gone, it should now trigger
set_modified_response_minus_block_text()
client.get(url_for("form_watch_checknow"), follow_redirects=True)
time.sleep(sleep_time_for_fetch_thread)
res = client.get(url_for("index"))
assert b'unviewed' in res.data
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data

View File

@@ -0,0 +1,30 @@
#!/usr/bin/python3
import time
from flask import url_for
from . util import live_server_setup
def test_trigger_functionality(client, live_server):
live_server_setup(live_server)
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
res = client.post(
url_for("import_page"),
data={"urls": "https://changedetection.io"},
follow_redirects=True
)
assert b"1 Imported" in res.data
res = client.get(
url_for("form_clone", uuid="first"),
follow_redirects=True
)
assert b"Cloned." in res.data

View File

@@ -89,7 +89,7 @@ def test_check_markup_css_filter_restriction(client, live_server):
assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
@@ -110,7 +110,7 @@ def test_check_markup_css_filter_restriction(client, live_server):
assert bytes(css_filter.encode('utf-8')) in res.data
# Trigger a check
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
@@ -118,7 +118,7 @@ def test_check_markup_css_filter_restriction(client, live_server):
set_modified_response()
# Trigger a check
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)

View File

@@ -0,0 +1,167 @@
#!/usr/bin/python3
import time
from flask import url_for
from ..html_tools import *
from .util import live_server_setup
def test_setup(live_server):
live_server_setup(live_server)
def set_original_response():
test_return_data = """<html>
<header>
<h2>Header</h2>
</header>
<nav>
<ul>
<li><a href="#">A</a></li>
<li><a href="#">B</a></li>
<li><a href="#">C</a></li>
</ul>
</nav>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
<div id="changetext">Some text that will change</div>
</body>
<footer>
<p>Footer</p>
</footer>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
def set_modified_response():
test_return_data = """<html>
<header>
<h2>Header changed</h2>
</header>
<nav>
<ul>
<li><a href="#">A changed</a></li>
<li><a href="#">B</a></li>
<li><a href="#">C</a></li>
</ul>
</nav>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
<div id="changetext">Some text that changes</div>
</body>
<footer>
<p>Footer changed</p>
</footer>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
def test_element_removal_output():
from changedetectionio import fetch_site_status
from inscriptis import get_text
# Check text with sub-parts renders correctly
content = """<html>
<header>
<h2>Header</h2>
</header>
<nav>
<ul>
<li><a href="#">A</a></li>
</ul>
</nav>
<body>
Some initial text</br>
<p>across multiple lines</p>
<div id="changetext">Some text that changes</div>
</body>
<footer>
<p>Footer</p>
</footer>
</html>
"""
html_blob = element_removal(
["header", "footer", "nav", "#changetext"], html_content=content
)
text = get_text(html_blob)
assert (
text
== """Some initial text
across multiple lines
"""
)
def test_element_removal_full(client, live_server):
sleep_time_for_fetch_thread = 3
set_original_response()
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for("test_endpoint", _external=True)
res = client.post(
url_for("import_page"), data={"urls": test_url}, follow_redirects=True
)
assert b"1 Imported" in res.data
# Goto the edit page, add the filter data
# Not sure why \r needs to be added - without the #changetext entry this is not necessary
subtractive_selectors_data = "header\r\nfooter\r\nnav\r\n#changetext"
res = client.post(
url_for("edit_page", uuid="first"),
data={
"subtractive_selectors": subtractive_selectors_data,
"url": test_url,
"tag": "",
"headers": "",
"fetch_backend": "html_requests",
},
follow_redirects=True,
)
assert b"Updated watch." in res.data
# Check it saved
res = client.get(
url_for("edit_page", uuid="first"),
)
assert bytes(subtractive_selectors_data.encode("utf-8")) in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# so that we set the state to 'unviewed' after all the edits
client.get(url_for("diff_history_page", uuid="first"))
# Make a change to header/footer/nav
set_modified_response()
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# There should not be an unviewed change, as changes should be removed
res = client.get(url_for("index"))
assert b"unviewed" not in res.data

View File

@@ -0,0 +1,87 @@
#!/usr/bin/python3
# coding=utf-8
import time
from flask import url_for
from .util import live_server_setup
import pytest
def test_setup(live_server):
live_server_setup(live_server)
def set_html_response():
test_return_data = """
<html><body><span class="nav_second_img_text">
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;铸大国重器,挺制造脊梁,致力能源未来,赋能美好生活。
</span>
</body></html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
# In case the server sends a Content-Type header but without a charset=
def test_check_encoding_detection(client, live_server):
set_html_response()
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', content_type="text/html", _external=True)
client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(2)
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
# Should see the proper string
assert "铸大国重".encode('utf-8') in res.data
# Should not see the failed encoding
assert b'\xc2\xa7' not in res.data
# In case the server does not send a Content-Type header at all
def test_check_encoding_detection_missing_content_type_header(client, live_server):
set_html_response()
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(2)
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
# Should see the proper string
assert "铸大国重".encode('utf-8') in res.data
# Should not see the failed encoding
assert b'\xc2\xa7' not in res.data
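Both encoding tests come down to not trusting the transport default when no charset is declared. A hedged sketch of that fallback using requests' apparent_encoding; the project's fetcher may do this differently:
import requests
def fetch_text_with_charset_fallback(url):
    r = requests.get(url, timeout=10)
    content_type = r.headers.get('Content-Type', '').lower()
    if 'charset=' not in content_type:
        # No declared charset: sniff it from the bytes instead of assuming ISO-8859-1
        r.encoding = r.apparent_encoding
    return r.text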

View File

@@ -0,0 +1,65 @@
#!/usr/bin/python3
import time
from flask import url_for
from . util import live_server_setup
from ..html_tools import *
def test_setup(live_server):
live_server_setup(live_server)
def test_error_handler(client, live_server):
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint',
status_code=403,
_external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(3)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
assert b'Status Code 403' in res.data
assert bytes("just now".encode('utf-8')) in res.data
# Just to be sure error text is properly handled
def test_error_text_handler(client, live_server):
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
res = client.post(
url_for("import_page"),
data={"urls": "https://errorfuldomainthatnevereallyexists12356.com"},
follow_redirects=True
)
assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(3)
res = client.get(url_for("index"))
assert b'Name or service not known' in res.data
assert bytes("just now".encode('utf-8')) in res.data

View File

@@ -0,0 +1,198 @@
#!/usr/bin/python3
import time
from flask import url_for
from .util import live_server_setup
from ..html_tools import *
def set_original_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
<div id="sametext">Some text thats the same</div>
<div class="changetext">Some text that will change</div>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def set_modified_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>which has this one new line</p>
</br>
So let's see what happens. </br>
<div id="sametext">Some text thats the same</div>
<div class="changetext">Some text that did change ( 1000 online <br/> 80 guests<br/> 2000 online )</div>
<div class="changetext">SomeCase insensitive 3456</div>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def set_multiline_response():
test_return_data = """<html>
<body>
<p>Something <br/>
across 6 billion multiple<br/>
lines
</p>
<div>aaand something lines</div>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def test_setup(client, live_server):
live_server_setup(live_server)
def test_check_filter_multiline(client, live_server):
set_multiline_response()
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(3)
# Goto the edit page, add our ignore text
# Add our URL to the import page
res = client.post(
url_for("edit_page", uuid="first"),
data={"css_filter": '',
'extract_text': '/something.+?6 billion.+?lines/si',
"url": test_url,
"tag": "",
"headers": "",
'fetch_backend': "html_requests"
},
follow_redirects=True
)
assert b"Updated watch." in res.data
time.sleep(3)
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
assert b'<div class="">Something' in res.data
assert b'<div class="">across 6 billion multiple' in res.data
assert b'<div class="">lines' in res.data
# but the last one, which also says 'lines', shouldn't be here (checks the non-greedy match)
assert b'aaand something lines' not in res.data
def test_check_filter_and_regex_extract(client, live_server):
sleep_time_for_fetch_thread = 3
css_filter = ".changetext"
set_original_response()
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(1)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Goto the edit page, add our ignore text
# Add our URL to the import page
res = client.post(
url_for("edit_page", uuid="first"),
data={"css_filter": css_filter,
'extract_text': '\d+ online\r\n\d+ guests\r\n/somecase insensitive \d+/i\r\n/somecase insensitive (345\d)/i',
"url": test_url,
"tag": "",
"headers": "",
'fetch_backend': "html_requests"
},
follow_redirects=True
)
assert b"Updated watch." in res.data
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Make a change
set_modified_response()
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should have 'unviewed' still
# Because it should only be looking at the '.changetext' class
res = client.get(url_for("index"))
assert b'unviewed' in res.data
# Check the HTML conversion was detected and worked
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
# Class will be blank for now because the frontend didn't apply the diff
assert b'<div class="">1000 online' in res.data
# All regex matching should be here
assert b'<div class="">2000 online' in res.data
# Both regexes should be here
assert b'<div class="">80 guests' in res.data
# Regex with flag handling should be here
assert b'<div class="">SomeCase insensitive 3456' in res.data
# Singular group from /somecase insensitive (345\d)/i
assert b'<div class="">3456' in res.data
# Text not captured by any of the extract rules should not be here
assert b'Some text that did change' not in res.data
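The extract_text rules above mix bare regex lines with /pattern/flags entries. A hedged sketch of how such a rule could be compiled, with flag letters mapped onto Python's aiLmsux; this is illustrative, not the exact implementation under test:
import re
FLAG_MAP = {'a': re.A, 'i': re.I, 'L': re.L, 'm': re.M, 's': re.S, 'u': re.U, 'x': re.X}
def compile_extract_rule(rule):
    # '/expression/flags' adds flag letters; a bare entry is still treated as a regex
    m = re.fullmatch(r'/(.+)/([aiLmsux]*)', rule, re.DOTALL)
    if not m:
        return re.compile(rule)
    pattern, letters = m.groups()
    flags = 0
    for letter in letters:
        flags |= FLAG_MAP[letter]
    return re.compile(pattern, flags)
rx = compile_extract_rule(r'/somecase insensitive (345\d)/i')
assert rx.search('SomeCase insensitive 3456').group(1) == '3456'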

View File

@@ -0,0 +1,134 @@
import os
import time
import re
from flask import url_for
from .util import set_original_response, live_server_setup
from changedetectionio.model import App
def set_response_with_filter():
test_return_data = """<html>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
<div id="nope-doesnt-exist">Some text thats the same</div>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def run_filter_test(client, content_filter):
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("form_quick_watch_add"),
data={"url": test_url, "tag": ''},
follow_redirects=True
)
assert b"Watch added" in res.data
# Give the thread time to pick up the first version
time.sleep(3)
# Goto the edit page, add our ignore text
# Add our URL to the import page
url = url_for('test_notification_endpoint', _external=True)
notification_url = url.replace('http', 'json')
print(">>>> Notification URL: " + notification_url)
# Just a regular notification setting, this will be used by the special 'filter not found' notification
notification_form_data = {"notification_urls": notification_url,
"notification_title": "New ChangeDetection.io Notification - {watch_url}",
"notification_body": "BASE URL: {base_url}\n"
"Watch URL: {watch_url}\n"
"Watch UUID: {watch_uuid}\n"
"Watch title: {watch_title}\n"
"Watch tag: {watch_tag}\n"
"Preview: {preview_url}\n"
"Diff URL: {diff_url}\n"
"Snapshot: {current_snapshot}\n"
"Diff: {diff}\n"
"Diff Full: {diff_full}\n"
":-)",
"notification_format": "Text"}
notification_form_data.update({
"url": test_url,
"tag": "my tag",
"title": "my title",
"headers": "",
"css_filter": content_filter,
"fetch_backend": "html_requests"})
res = client.post(
url_for("edit_page", uuid="first"),
data=notification_form_data,
follow_redirects=True
)
assert b"Updated watch." in res.data
time.sleep(3)
# Now the notification file should not exist, because we didn't reach the threshold yet
assert not os.path.isfile("test-datastore/notification.txt")
for i in range(0, App._FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT):
res = client.get(url_for("form_watch_checknow"), follow_redirects=True)
time.sleep(3)
# We should see something in the frontend
assert b'Did the page change its layout' in res.data
# Now it should exist and contain our "filter not found" alert
assert os.path.isfile("test-datastore/notification.txt")
notification = False
with open("test-datastore/notification.txt", 'r') as f:
notification = f.read()
assert 'CSS/xPath filter was not present in the page' in notification
assert content_filter.replace('"', '\\"') in notification
# Remove it and prove that it doesn't trigger when not expected
os.unlink("test-datastore/notification.txt")
set_response_with_filter()
for i in range(0, App._FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT):
client.get(url_for("form_watch_checknow"), follow_redirects=True)
time.sleep(3)
# It should have sent a notification, but..
assert os.path.isfile("test-datastore/notification.txt")
# but it should not contain the info about the failed filter
with open("test-datastore/notification.txt", 'r') as f:
notification = f.read()
assert 'CSS/xPath filter was not present in the page' not in notification
# cleanup for the next
client.get(
url_for("form_delete", uuid="all"),
follow_redirects=True
)
os.unlink("test-datastore/notification.txt")
def test_setup(live_server):
live_server_setup(live_server)
def test_check_css_filter_failure_notification(client, live_server):
set_original_response()
time.sleep(1)
run_filter_test(client, '#nope-doesnt-exist')
def test_check_xpath_filter_failure_notification(client, live_server):
set_original_response()
time.sleep(1)
run_filter_test(client, '//*[@id="nope-doesnt-exist"]')
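run_filter_test() loops exactly App._FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT times before expecting the alert. A toy sketch of the consecutive-miss counter behind that behaviour; the names here are illustrative only:
FILTER_FAILURE_THRESHOLD_ATTEMPTS = 6  # the "6 (configurable) attempts" default
def note_check_result(consecutive_misses, filter_found):
    # Reset the counter on success, count up on a miss,
    # and only queue the notification once the threshold is reached
    consecutive_misses = 0 if filter_found else consecutive_misses + 1
    return consecutive_misses, consecutive_misses >= FILTER_FAILURE_THRESHOLD_ATTEMPTS
misses, notify = 0, False
for _ in range(FILTER_FAILURE_THRESHOLD_ATTEMPTS):
    misses, notify = note_check_result(misses, filter_found=False)
assert notify  # the 'filter was not present' notification would be queued here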

View File

@@ -1,80 +0,0 @@
import json
import time
from flask import url_for
from . util import set_original_response, set_modified_response, live_server_setup
# Hard to just add more live server URLs when one test is already running (I think)
# So we add our test here (was in a different file)
def test_headers_in_request(client, live_server):
live_server_setup(live_server)
# Add our URL to the import page
test_url = url_for('test_headers', _external=True)
# Add the test URL twice, we will check
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
cookie_header = '_ga=GA1.2.1022228332; cookie-preferences=analytics:accepted;'
# Add some headers to a request
res = client.post(
url_for("edit_page", uuid="first"),
data={
"url": test_url,
"tag": "",
"fetch_backend": "html_requests",
"headers": "xxx:ooo\ncool:yeah\r\ncookie:"+cookie_header},
follow_redirects=True
)
assert b"Updated watch." in res.data
# Give the thread time to pick up the first version
time.sleep(5)
# The service should echo back the request headers
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
# Flask will convert the header key to uppercase
assert b"Xxx:ooo" in res.data
assert b"Cool:yeah" in res.data
# The test call service will return the headers as the body
from html import escape
assert escape(cookie_header).encode('utf-8') in res.data
time.sleep(5)
# Re #137 - Examine the JSON index file, it should have only one set of headers entered
watches_with_headers = 0
with open('test-datastore/url-watches.json') as f:
app_struct = json.load(f)
for uuid in app_struct['watching']:
if (len(app_struct['watching'][uuid]['headers'])):
watches_with_headers += 1
# Should be only one with headers set
assert watches_with_headers==1
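The "headers" field posted above is free text with one "Key:value" pair per line (\n or \r\n separated). A small sketch of parsing it into a dict for the outgoing request, illustrative rather than the project's exact code:
def parse_headers_field(raw):
    # Split on the first ':' only, so values (like cookies) may contain ':' themselves
    headers = {}
    for line in raw.replace('\r\n', '\n').split('\n'):
        if ':' in line:
            key, value = line.split(':', 1)
            headers[key.strip()] = value.strip()
    return headers
assert parse_headers_field("xxx:ooo\ncool:yeah\r\ncookie:_ga=GA1.2.1") == {
    'xxx': 'ooo', 'cool': 'yeah', 'cookie': '_ga=GA1.2.1'}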

View File

@@ -0,0 +1,84 @@
#!/usr/bin/python3
import time
import os
import json
import logging
from flask import url_for
from .util import live_server_setup
from urllib.parse import urlparse, parse_qs
def test_consistent_history(client, live_server):
live_server_setup(live_server)
# Give the endpoint time to spin up
time.sleep(1)
r = range(1, 50)
for one in r:
test_url = url_for('test_endpoint', content_type="text/html", content=str(one), _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(3)
while True:
res = client.get(url_for("index"))
logging.debug("Waiting for 'Checking now' to go away..")
if b'Checking now' not in res.data:
break
time.sleep(0.5)
time.sleep(3)
# Essentially just triggers the DB write/update
res = client.post(
url_for("settings_page"),
data={"application-empty_pages_are_a_change": "",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Settings updated." in res.data
# Give it time to write it out
time.sleep(3)
json_db_file = os.path.join(live_server.app.config['DATASTORE'].datastore_path, 'url-watches.json')
json_obj = None
with open(json_db_file, 'r') as f:
json_obj = json.load(f)
# assert the right number of watches was found in the JSON
assert len(json_obj['watching']) == len(r), "Correct number of watches was found in the JSON"
# each one should have a history.txt containing just one line
for w in json_obj['watching'].keys():
history_txt_index_file = os.path.join(live_server.app.config['DATASTORE'].datastore_path, w, 'history.txt')
assert os.path.isfile(history_txt_index_file), "History.txt should exist where I expect it - {}".format(history_txt_index_file)
# Same like in model.Watch
with open(history_txt_index_file, "r") as f:
tmp_history = dict(i.strip().split(',', 2) for i in f.readlines())
assert len(tmp_history) == 1, "History.txt should contain 1 line"
# Should be two files: the history.txt and the snapshot
files_in_watch_dir = os.listdir(os.path.join(live_server.app.config['DATASTORE'].datastore_path,
w))
# Find the snapshot one
for fname in files_in_watch_dir:
if fname != 'history.txt':
# contents should match what we requested as content returned from the test url
with open(os.path.join(live_server.app.config['DATASTORE'].datastore_path, w, fname), 'r') as snapshot_f:
contents = snapshot_f.read()
watch_url = json_obj['watching'][w]['url']
u = urlparse(watch_url)
q = parse_qs(u[4])
assert q['content'][0] == contents.strip(), "Snapshot file {} should contain {}".format(fname, q['content'][0])
assert len(files_in_watch_dir) == 2, "Should be just two files in the dir, history.txt and the snapshot"
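As the assertions show, each watch's history.txt is a one-line-per-snapshot index. A tiny reader using the same split as the dict() call above (the field meaning, timestamp then snapshot filename, is assumed from context):
def read_history_index(history_txt_path):
    # One "<timestamp>,<snapshot file>" pair per line
    with open(history_txt_path) as f:
        return dict(line.strip().split(',', 2) for line in f if line.strip())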

View File

@@ -0,0 +1,38 @@
#!/usr/bin/python3
"""Test suite for the method to extract text from an html string"""
from ..html_tools import html_to_text
def test_html_to_text_func():
test_html = """<html>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
<a href="/first_link"> More Text </a>
</br>
So let's see what happens. </br>
<a href="second_link.com"> Even More Text </a>
</body>
</html>
"""
# extract text, with 'render_anchor_tag_content' set to False
text_content = html_to_text(test_html, render_anchor_tag_content=False)
no_links_text = \
"Some initial text\n\nWhich is across multiple " \
"lines\n\nMore Text So let's see what happens. Even More Text"
# check that no links are in the extracted text
assert text_content == no_links_text
# extract text, with 'render_anchor_tag_content' set to True
text_content = html_to_text(test_html, render_anchor_tag_content=True)
links_text = \
"Some initial text\n\nWhich is across multiple lines\n\n[ More Text " \
"](/first_link) So let's see what happens. [ Even More Text ]" \
"(second_link.com)"
# check that links are present in the extracted text
assert text_content == links_text
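html_to_text() wraps inscriptis, and render_anchor_tag_content maps onto its link-rendering options. A hedged sketch; the ParserConfig argument names are assumed from inscriptis 2.x and not verified against the project:
from inscriptis import get_text
from inscriptis.model.config import ParserConfig
def html_to_text_sketch(html_content, render_anchor_tag_content=False):
    # display_links/display_anchors turn "<a href=x>text</a>" into "[ text ](x)"-style output
    config = ParserConfig(display_links=render_anchor_tag_content,
                          display_anchors=render_anchor_tag_content)
    return get_text(html_content, config)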

View File

@@ -3,6 +3,7 @@
import time
from flask import url_for
from . util import live_server_setup
from changedetectionio import html_tools
def test_setup(live_server):
live_server_setup(live_server)
@@ -23,7 +24,7 @@ def test_strip_regex_text_func():
ignore_lines = ["sometimes", "/\s\d{2,3}\s/", "/ignore-case text/"]
fetcher = fetch_site_status.perform_site_check(datastore=False)
stripped_content = fetcher.strip_ignore_text(test_content, ignore_lines)
stripped_content = html_tools.strip_ignore_text(test_content, ignore_lines)
assert b"but 1 lines" in stripped_content
assert b"igNORe-cAse text" not in stripped_content

View File

@@ -3,6 +3,7 @@
import time
from flask import url_for
from . util import live_server_setup
from changedetectionio import html_tools
def test_setup(live_server):
live_server_setup(live_server)
@@ -23,7 +24,7 @@ def test_strip_text_func():
ignore_lines = ["sometimes"]
fetcher = fetch_site_status.perform_site_check(datastore=False)
stripped_content = fetcher.strip_ignore_text(test_content, ignore_lines)
stripped_content = html_tools.strip_ignore_text(test_content, ignore_lines)
assert b"sometimes" not in stripped_content
assert b"Some content" in stripped_content
@@ -52,6 +53,8 @@ def set_modified_original_ignore_response():
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
<p>new ignore stuff</p>
<p>blah</p>
</body>
</html>
@@ -67,7 +70,7 @@ def set_modified_ignore_response():
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
<P>ZZZZZ</P>
<P>ZZZZz</P>
</br>
So let's see what happens. </br>
</body>
@@ -82,7 +85,8 @@ def set_modified_ignore_response():
def test_check_ignore_text_functionality(client, live_server):
sleep_time_for_fetch_thread = 3
ignore_text = "XXXXX\r\nYYYYY\r\nZZZZZ"
# Use a mix of case in ZzZ to prove it works case-insensitive.
ignore_text = "XXXXX\r\nYYYYY\r\nzZzZZ\r\nnew ignore stuff"
set_original_ignore_response()
# Give the endpoint time to spin up
@@ -98,7 +102,7 @@ def test_check_ignore_text_functionality(client, live_server):
assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
@@ -119,7 +123,7 @@ def test_check_ignore_text_functionality(client, live_server):
assert bytes(ignore_text.encode('utf-8')) in res.data
# Trigger a check
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
@@ -133,7 +137,7 @@ def test_check_ignore_text_functionality(client, live_server):
set_modified_ignore_response()
# Trigger a check
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
@@ -142,12 +146,115 @@ def test_check_ignore_text_functionality(client, live_server):
assert b'unviewed' not in res.data
assert b'/test-endpoint' in res.data
# Just to be sure.. set a regular modified change..
set_modified_original_ignore_response()
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
time.sleep(sleep_time_for_fetch_thread)
res = client.get(url_for("index"))
assert b'unviewed' in res.data
# Check the preview/highlighter, we should be able to see what we ignored, but it should be highlighted
# We only introduce the "modified" content that includes what we ignore so we can prove the newest version also displays
# at /preview
res = client.get(url_for("preview_page", uuid="first"))
# We should be able to see what we ignored
assert b'<div class="ignored">new ignore stuff' in res.data
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data
def test_check_global_ignore_text_functionality(client, live_server):
sleep_time_for_fetch_thread = 3
# Give the endpoint time to spin up
time.sleep(1)
ignore_text = "XXXXX\r\nYYYYY\r\nZZZZZ"
set_original_ignore_response()
# Goto the settings page, add our ignore text
res = client.post(
url_for("settings_page"),
data={
"requests-time_between_check-minutes": 180,
"application-global_ignore_text": ignore_text,
'application-fetch_backend': "html_requests"
},
follow_redirects=True
)
assert b"Settings updated." in res.data
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Goto the edit page of the item, add our ignore text
# Add our URL to the import page
res = client.post(
url_for("edit_page", uuid="first"),
data={"ignore_text": "something irrelevent but just to check", "url": test_url, 'fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Updated watch." in res.data
# Check it saved
res = client.get(
url_for("settings_page"),
)
assert bytes(ignore_text.encode('utf-8')) in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# so that we are sure everything is viewed and in a known 'nothing changed' state
res = client.get(url_for("diff_history_page", uuid="first"))
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
assert b'/test-endpoint' in res.data
# Make a change which includes the ignore text
set_modified_ignore_response()
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
assert b'/test-endpoint' in res.data
# Just to be sure.. set a regular modified change that will trigger it
set_modified_original_ignore_response()
client.get(url_for("form_watch_checknow"), follow_redirects=True)
time.sleep(sleep_time_for_fetch_thread)
res = client.get(url_for("index"))
assert b'unviewed' in res.data
res = client.get(url_for("api_delete", uuid="all"), follow_redirects=True)
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data

View File

@@ -0,0 +1,125 @@
#!/usr/bin/python3
"""Test suite for the render/not render anchor tag content functionality"""
import time
from flask import url_for
from .util import live_server_setup
def test_setup(live_server):
live_server_setup(live_server)
def set_original_ignore_response():
test_return_data = """<html>
<body>
Some initial text</br>
<a href="/original_link"> Some More Text </a>
</br>
So let's see what happens. </br>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
# Should be the same as set_original_ignore_response() but with a different
# link
def set_modified_ignore_response():
test_return_data = """<html>
<body>
Some initial text</br>
<a href="/modified_link"> Some More Text </a>
</br>
So let's see what happens. </br>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
def test_render_anchor_tag_content_true(client, live_server):
"""Testing that the link changes are detected when
render_anchor_tag_content setting is set to true"""
sleep_time_for_fetch_thread = 3
# Give the endpoint time to spin up
time.sleep(1)
# set original html text
set_original_ignore_response()
# Goto the settings page, choose to ignore links (don't select/send "application-render_anchor_tag_content")
res = client.post(
url_for("settings_page"),
data={
"requests-time_between_check-minutes": 180,
"application-fetch_backend": "html_requests",
},
follow_redirects=True,
)
assert b"Settings updated." in res.data
# Add our URL to the import page
test_url = url_for("test_endpoint", _external=True)
res = client.post(
url_for("import_page"), data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(sleep_time_for_fetch_thread)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# set a new html text with a modified link
set_modified_ignore_response()
time.sleep(sleep_time_for_fetch_thread)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# We should not see the rendered anchor tag
res = client.get(url_for("preview_page", uuid="first"))
assert '(/modified_link)' not in res.data.decode()
# Goto the settings page, ENABLE render anchor tag
res = client.post(
url_for("settings_page"),
data={
"requests-time_between_check-minutes": 180,
"application-render_anchor_tag_content": "true",
"application-fetch_backend": "html_requests",
},
follow_redirects=True,
)
assert b"Settings updated." in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# check that the anchor tag content is rendered
res = client.get(url_for("preview_page", uuid="first"))
assert '(/modified_link)' in res.data.decode()
# since the link has changed, and we chose to render anchor tag content,
# we should detect a change (new 'unviewed' class)
res = client.get(url_for("index"))
assert b"unviewed" in res.data
assert b"/test-endpoint" in res.data
# Cleanup everything
res = client.get(url_for("form_delete", uuid="all"),
follow_redirects=True)
assert b'Deleted' in res.data

View File

@@ -0,0 +1,190 @@
#!/usr/bin/python3
import time
from flask import url_for
from . util import live_server_setup
def test_setup(live_server):
live_server_setup(live_server)
def set_original_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
def set_some_changed_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>Which is across multiple lines, and a new thing too.</p>
</br>
So let's see what happens. </br>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
def test_normal_page_check_works_with_ignore_status_code(client, live_server):
sleep_time_for_fetch_thread = 3
# Give the endpoint time to spin up
time.sleep(1)
set_original_response()
# Goto the settings page, add our ignore text
res = client.post(
url_for("settings_page"),
data={
"requests-time_between_check-minutes": 180,
"application-ignore_status_codes": "y",
'application-fetch_backend': "html_requests"
},
follow_redirects=True
)
assert b"Settings updated." in res.data
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(sleep_time_for_fetch_thread)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
set_some_changed_response()
time.sleep(sleep_time_for_fetch_thread)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' in res.data
assert b'/test-endpoint' in res.data
# Tests the whole stack works with status codes ignored
def test_403_page_check_works_with_ignore_status_code(client, live_server):
sleep_time_for_fetch_thread = 3
set_original_response()
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', status_code=403, _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Goto the edit page, check our ignore option
# Add our URL to the import page
res = client.post(
url_for("edit_page", uuid="first"),
data={"ignore_status_codes": "y", "url": test_url, "tag": "", "headers": "", 'fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Updated watch." in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Make a change
set_some_changed_response()
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should have 'unviewed' still
# Because the 403 status was ignored and the page content changed
res = client.get(url_for("index"))
assert b'unviewed' in res.data
# Tests that without ignoring status codes, the 403 is reported as an error
def test_403_page_check_fails_without_ignore_status_code(client, live_server):
sleep_time_for_fetch_thread = 3
set_original_response()
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', status_code=403, _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Goto the edit page, check our ignore option
# Add our URL to the import page
res = client.post(
url_for("edit_page", uuid="first"),
data={"url": test_url, "tag": "", "headers": "", 'fetch_backend': "html_requests"},
follow_redirects=True
)
assert b"Updated watch." in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Make a change
set_some_changed_response()
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Without ignore_status_codes set, the 403 should be reported as an error on the index
res = client.get(url_for("index"))
assert b'Status Code 403' in res.data
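The three tests above pin down ignore_status_codes. The decision they cover, as a toy sketch (illustrative only):
def classify_response(status_code, ignore_status_codes=False):
    # With the option off, a non-2xx response surfaces as an error ("Status Code 403");
    # with it on, the body is still extracted and diffed like any other fetch.
    if status_code >= 400 and not ignore_status_codes:
        return "error: Status Code {}".format(status_code)
    return "process body for changes"
assert classify_response(403) == "error: Status Code 403"
assert classify_response(403, ignore_status_codes=True) == "process body for changes"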

View File

@@ -0,0 +1,96 @@
#!/usr/bin/python3
import time
from flask import url_for
from . util import live_server_setup
def test_setup(live_server):
live_server_setup(live_server)
# Should be the same as set_original_ignore_response() but with a little more whitespace
def set_original_ignore_response_but_with_whitespace():
test_return_data = """<html>
<body>
Some initial text</br>
<p>
Which is across multiple lines</p>
<br>
</br>
So let's see what happens. </br>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
def set_original_ignore_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
# If the only change was in whitespace, then no change should be detected
def test_check_ignore_whitespace(client, live_server):
sleep_time_for_fetch_thread = 3
# Give the endpoint time to spin up
time.sleep(1)
set_original_ignore_response()
# Goto the settings page, add our ignore text
res = client.post(
url_for("settings_page"),
data={
"requests-time_between_check-minutes": 180,
"application-ignore_whitespace": "y",
"application-fetch_backend": "html_requests"
},
follow_redirects=True
)
assert b"Settings updated." in res.data
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(sleep_time_for_fetch_thread)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
set_original_ignore_response_but_with_whitespace()
time.sleep(sleep_time_for_fetch_thread)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
assert b'/test-endpoint' in res.data
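ignore_whitespace means a reflow of spacing alone should not count as a change. A toy sketch of the normalise-before-hash idea, not the project's actual checksum code:
import hashlib
def snapshot_checksum(text, ignore_whitespace=False):
    if ignore_whitespace:
        # Strip per-line padding and blank lines so spacing-only edits hash identically
        text = "\n".join(line.strip() for line in text.splitlines() if line.strip())
    return hashlib.md5(text.encode('utf-8')).hexdigest()
assert snapshot_checksum("a\n   b   \n\n", True) == snapshot_checksum("a\nb", True)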

View File

@@ -0,0 +1,120 @@
#!/usr/bin/python3
import time
from flask import url_for
from .util import live_server_setup
def test_setup(client, live_server):
live_server_setup(live_server)
def test_import(client, live_server):
# Give the endpoint time to spin up
time.sleep(1)
res = client.post(
url_for("import_page"),
data={
"distill-io": "",
"urls": """https://example.com
https://example.com tag1
https://example.com tag1, other tag"""
},
follow_redirects=True,
)
assert b"3 Imported" in res.data
assert b"tag1" in res.data
assert b"other tag" in res.data
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
# Clear flask alerts
res = client.get( url_for("index"))
res = client.get( url_for("index"))
def xtest_import_skip_url(client, live_server):
# Give the endpoint time to spin up
time.sleep(1)
res = client.post(
url_for("import_page"),
data={
"distill-io": "",
"urls": """https://example.com
:ht000000broken
"""
},
follow_redirects=True,
)
assert b"1 Imported" in res.data
assert b"ht000000broken" in res.data
assert b"1 Skipped" in res.data
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
# Clear flask alerts
res = client.get( url_for("index"))
def test_import_distillio(client, live_server):
distill_data='''
{
"client": {
"local": 1
},
"data": [
{
"name": "Unraid | News",
"uri": "https://unraid.net/blog",
"config": "{\\"selections\\":[{\\"frames\\":[{\\"index\\":0,\\"excludes\\":[],\\"includes\\":[{\\"type\\":\\"xpath\\",\\"expr\\":\\"(//div[@id='App']/div[contains(@class,'flex')]/main[contains(@class,'relative')]/section[contains(@class,'relative')]/div[@class='container']/div[contains(@class,'flex')]/div[contains(@class,'w-full')])[1]\\"}]}],\\"dynamic\\":true,\\"delay\\":2}],\\"ignoreEmptyText\\":true,\\"includeStyle\\":false,\\"dataAttr\\":\\"text\\"}",
"tags": ["nice stuff", "nerd-news"],
"content_type": 2,
"state": 40,
"schedule": "{\\"type\\":\\"INTERVAL\\",\\"params\\":{\\"interval\\":4447}}",
"ts": "2022-03-27T15:51:15.667Z"
}
]
}
'''
# Give the endpoint time to spin up
time.sleep(1)
client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
res = client.post(
url_for("import_page"),
data={
"distill-io": distill_data,
"urls" : ''
},
follow_redirects=True,
)
assert b"Unable to read JSON file, was it broken?" not in res.data
assert b"1 Imported from Distill.io" in res.data
res = client.get( url_for("edit_page", uuid="first"))
assert b"https://unraid.net/blog" in res.data
assert b"Unraid | News" in res.data
# flask/wtforms should recode this, check we see it
# wtforms encodes it like id=&#39 ,but html.escape makes it like id=&#x27
# - so just check it manually :(
#import json
#import html
#d = json.loads(distill_data)
# embedded_d=json.loads(d['data'][0]['config'])
# x=html.escape(embedded_d['selections'][0]['frames'][0]['includes'][0]['expr']).encode('utf-8')
assert b"xpath:(//div[@id=&#39;App&#39;]/div[contains(@class,&#39;flex&#39;)]/main[contains(@class,&#39;relative&#39;)]/section[contains(@class,&#39;relative&#39;)]/div[@class=&#39;container&#39;]/div[contains(@class,&#39;flex&#39;)]/div[contains(@class,&#39;w-full&#39;)])[1]" in res.data
# did the tags work?
res = client.get( url_for("index"))
assert b"nice stuff" in res.data
assert b"nerd-news" in res.data
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
# Clear flask alerts
res = client.get(url_for("index"))
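A rough sketch of the Distill.io import path this test drives: pull url, name and tags out of each entry in the export's "data" list. Field names are taken from the sample JSON above; validation and config handling are omitted:
import json
def parse_distill_export(raw):
    watches = []
    for entry in json.loads(raw).get('data', []):
        watches.append({
            'url': entry.get('uri'),
            'title': entry.get('name'),
            'tag': ', '.join(entry.get('tags', [])),
        })
    return watches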

View File

@@ -1,10 +1,15 @@
#!/usr/bin/python3
# coding=utf-8
import time
from flask import url_for
from . util import live_server_setup
import pytest
def test_setup(live_server):
live_server_setup(live_server)
def test_unittest_inline_html_extract():
# So lets pretend that the JSON we want is inside some HTML
content="""
@@ -42,6 +47,45 @@ and it can also be repeated
with pytest.raises(html_tools.JSONNotFound) as e_info:
html_tools.extract_json_as_string('COMPLETE GIBBERISH, NO JSON!', "$.id")
def set_original_ext_response():
data = """
[
{
"isPriceLowered": false,
"status": "ForSale",
"statusOrig": "for sale"
},
{
"_id": "5e7b3e1fb3262d306323ff1e",
"listingsType": "consumer",
"status": "ForSale",
"statusOrig": "for sale"
}
]
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(data)
def set_modified_ext_response():
data = """
[
{
"isPriceLowered": false,
"status": "Sold",
"statusOrig": "sold"
},
{
"_id": "5e7b3e1fb3262d306323ff1e",
"listingsType": "consumer",
"isPriceLowered": false,
"status": "Sold"
}
]
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(data)
def set_original_response():
test_return_data = """
@@ -60,7 +104,23 @@ def set_original_response():
],
"boss": {
"name": "Fat guy"
}
},
"available": true
}
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def set_response_with_html():
test_return_data = """
{
"test": [
{
"html": "<b>"
}
]
}
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
@@ -83,8 +143,9 @@ def set_modified_response():
}
],
"boss": {
"name": "Foobar"
}
"name": "Örnsköldsvik"
},
"available": false
}
"""
@@ -93,11 +154,38 @@ def set_modified_response():
return None
def test_check_json_without_filter(client, live_server):
# Request a JSON document from a application/json source containing HTML
# and be sure it doesn't get chewed up by inscriptis
set_response_with_html()
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', content_type="application/json", _external=True)
client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(3)
res = client.get(
url_for("preview_page", uuid="first"),
follow_redirects=True
)
assert b'&#34;&lt;b&gt;' in res.data
assert res.data.count(b'{\n') >= 2
def test_check_json_filter(client, live_server):
live_server_setup(live_server)
json_filter = 'json:boss.name'
set_original_response()
@@ -106,7 +194,7 @@ def test_check_json_filter(client, live_server):
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
test_url = url_for('test_endpoint', content_type="application/json", _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
@@ -115,7 +203,7 @@ def test_check_json_filter(client, live_server):
assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(3)
@@ -141,7 +229,7 @@ def test_check_json_filter(client, live_server):
assert bytes(json_filter.encode('utf-8')) in res.data
# Trigger a check
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(3)
@@ -149,7 +237,7 @@ def test_check_json_filter(client, live_server):
set_modified_response()
# Trigger a check
client.get(url_for("api_watch_checknow"), follow_redirects=True)
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(4)
@@ -159,5 +247,132 @@ def test_check_json_filter(client, live_server):
# Should not see this, because its not in the JSONPath we entered
res = client.get(url_for("diff_history_page", uuid="first"))
# But the change should be there, though it's hard to test the change was detected because it will show old and new versions
assert b'Foobar' in res.data
# And #462 - check we see the proper utf-8 string there
assert "Örnsköldsvik".encode('utf-8') in res.data
def test_check_json_filter_bool_val(client, live_server):
json_filter = "json:$['available']"
set_original_response()
# Give the endpoint time to spin up
time.sleep(1)
test_url = url_for('test_endpoint', content_type="application/json", _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(3)
# Goto the edit page, add our ignore text
# Add our URL to the import page
res = client.post(
url_for("edit_page", uuid="first"),
data={"css_filter": json_filter,
"url": test_url,
"tag": "",
"headers": "",
"fetch_backend": "html_requests"
},
follow_redirects=True
)
assert b"Updated watch." in res.data
time.sleep(3)
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(3)
# Make a change
set_modified_response()
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(3)
res = client.get(url_for("diff_history_page", uuid="first"))
# But the change should be there, though it's hard to test the change was detected because it will show old and new versions
assert b'false' in res.data
# Re #265 - Extended JSON selector test
# Stuff to consider here
# - Selector should be allowed to return empty when it doesn't match (people might wait for some condition)
# - The 'diff' tab could show the old and new content
# - Form should let us enter a selector that doesn't (yet) match anything
def test_check_json_ext_filter(client, live_server):
json_filter = 'json:$[?(@.status==Sold)]'
set_original_ext_response()
# Give the endpoint time to spin up
time.sleep(1)
# Add our URL to the import page
test_url = url_for('test_endpoint', content_type="application/json", _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
follow_redirects=True
)
assert b"1 Imported" in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(3)
# Goto the edit page, add our ignore text
# Add our URL to the import page
res = client.post(
url_for("edit_page", uuid="first"),
data={"css_filter": json_filter,
"url": test_url,
"tag": "",
"headers": "",
"fetch_backend": "html_requests"
},
follow_redirects=True
)
assert b"Updated watch." in res.data
# Check it saved
res = client.get(
url_for("edit_page", uuid="first"),
)
assert bytes(json_filter.encode('utf-8')) in res.data
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(3)
# Make a change
set_modified_ext_response()
# Trigger a check
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(4)
# It should have 'unviewed'
res = client.get(url_for("index"))
assert b'unviewed' in res.data
res = client.get(url_for("diff_history_page", uuid="first"))
# We should never see 'ForSale' because we are selecting on 'Sold' in the rule,
# But we should know it triggered ('unviewed' assert above)
assert b'ForSale' not in res.data
assert b'Sold' in res.data
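The json: filters above (json:boss.name, json:$['available'], json:$[?(@.status==Sold)]) are JSONPath expressions. A hedged sketch of evaluating one with jsonpath-ng's extended parser; the project wraps something similar in html_tools, and exact expression support is not verified here:
import json
from jsonpath_ng.ext import parse
def extract_json_as_string_sketch(json_text, expression):
    # 'expression' is the part after the 'json:' prefix, e.g. "$['available']"
    matches = parse(expression).find(json.loads(json_text))
    values = [m.value for m in matches]
    # A single scalar comes back bare, multiple matches as a JSON list
    return json.dumps(values[0] if len(values) == 1 else values, indent=4)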

View File

@@ -0,0 +1,102 @@
#!/usr/bin/python3
import time
from flask import url_for
from urllib.request import urlopen
from .util import set_original_response, set_modified_response, live_server_setup
sleep_time_for_fetch_thread = 3
def set_nonrenderable_response():
test_return_data = """<html>
<head><title>modified head title</title></head>
<!-- like when some angular app was broken and doesnt render or whatever -->
<body>
</body>
</html>
"""
with open("test-datastore/endpoint-content.txt", "w") as f:
f.write(test_return_data)
return None
def test_check_basic_change_detection_functionality(client, live_server):
set_original_response()
live_server_setup(live_server)
# Add our URL to the import page
res = client.post(
url_for("import_page"),
data={"urls": url_for('test_endpoint', _external=True)},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(sleep_time_for_fetch_thread)
# Do this a few times.. ensures we don't accidentally set the status
for n in range(3):
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
#####################
client.post(
url_for("settings_page"),
data={"application-empty_pages_are_a_change": "",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"},
follow_redirects=True
)
# this should not trigger a change, because no good text could be converted from the HTML
set_nonrenderable_response()
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
# ok now do the opposite
client.post(
url_for("settings_page"),
data={"application-empty_pages_are_a_change": "y",
"requests-time_between_check-minutes": 180,
'application-fetch_backend': "html_requests"},
follow_redirects=True
)
set_modified_response()
client.get(url_for("form_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# Now it should report a change (a new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' in res.data
#
# Cleanup everything
res = client.get(url_for("form_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data

View File

@@ -2,26 +2,45 @@ import os
import time
import re
from flask import url_for
from . util import set_original_response, set_modified_response, live_server_setup
from . util import set_original_response, set_modified_response, set_more_modified_response, live_server_setup
import logging
from changedetectionio.notification import default_notification_body, default_notification_title
def test_setup(live_server):
live_server_setup(live_server)
# Hard to just add more live server URLs when one test is already running (I think)
# So we add our test here (was in a different file)
def test_check_notification(client, live_server):
live_server_setup(live_server)
set_original_response()
# Give the endpoint time to spin up
time.sleep(3)
# Re 360 - new install should have defaults set
res = client.get(url_for("settings_page"))
assert default_notification_body.encode() in res.data
assert default_notification_title.encode() in res.data
# When test mode is in BASE_URL env mode, we should see this already configured
env_base_url = os.getenv('BASE_URL', '').strip()
if len(env_base_url):
logging.debug(">>> BASE_URL enabled, looking for %s", env_base_url)
res = client.get(url_for("settings_page"))
assert bytes(env_base_url.encode('utf-8')) in res.data
else:
logging.debug(">>> SKIPPING BASE_URL check")
# re #242 - when you edited an existing new entry, it would not correctly show the notification settings
# Add our URL to the import page
test_url = url_for('test_endpoint', _external=True)
res = client.post(
url_for("import_page"),
data={"urls": test_url},
url_for("form_quick_watch_add"),
data={"url": test_url, "tag": ''},
follow_redirects=True
)
assert b"1 Imported" in res.data
assert b"Watch added" in res.data
# Give the thread time to pick up the first version
time.sleep(3)
@@ -32,142 +51,167 @@ def test_check_notification(client, live_server):
notification_url = url.replace('http', 'json')
print (">>>> Notification URL: "+notification_url)
notification_form_data = {"notification_urls": notification_url,
"notification_title": "New ChangeDetection.io Notification - {watch_url}",
"notification_body": "BASE URL: {base_url}\n"
"Watch URL: {watch_url}\n"
"Watch UUID: {watch_uuid}\n"
"Watch title: {watch_title}\n"
"Watch tag: {watch_tag}\n"
"Preview: {preview_url}\n"
"Diff URL: {diff_url}\n"
"Snapshot: {current_snapshot}\n"
"Diff: {diff}\n"
"Diff Full: {diff_full}\n"
":-)",
"notification_format": "Text"}
notification_form_data.update({
"url": test_url,
"tag": "my tag",
"title": "my title",
"headers": "",
"fetch_backend": "html_requests"})
res = client.post(
url_for("edit_page", uuid="first"),
data={"notification_urls": notification_url,
"notification_title": "New ChangeDetection.io Notification - {watch_url}",
"notification_body": "BASE URL: {base_url}\n"
"Watch URL: {watch_url}\n"
"Watch UUID: {watch_uuid}\n"
"Watch title: {watch_title}\n"
"Watch tag: {watch_tag}\n"
"Preview: {preview_url}\n"
"Diff URL: {diff_url}\n"
"Snapshot: {current_snapshot}\n"
":-)",
"url": test_url,
"tag": "my tag",
"title": "my title",
"headers": "",
"fetch_backend": "html_requests",
"trigger_check": "y"},
data=notification_form_data,
follow_redirects=True
)
assert b"Updated watch." in res.data
assert b"Notifications queued" in res.data
# Hit the edit page, be sure that we saved it
# Re #242 - wasnt saving?
res = client.get(
url_for("edit_page", uuid="first"))
assert bytes(notification_url.encode('utf-8')) in res.data
assert bytes("New ChangeDetection.io Notification".encode('utf-8')) in res.data
    ## Now recheck, and it should have sent the notification

    time.sleep(3)
    set_modified_response()

    notification_submission = None

    # Trigger a check
    client.get(url_for("form_watch_checknow"), follow_redirects=True)

    # Give the thread time to pick it up
    time.sleep(3)

    # Did the front end see it?
    res = client.get(
        url_for("index"))
    assert bytes("just now".encode('utf-8')) in res.data

    # Verify what was sent as a notification, this file should exist
    with open("test-datastore/notification.txt", "r") as f:
        notification_submission = f.read()
    os.unlink("test-datastore/notification.txt")

    # Did we see the URL that had a change, in the notification?
    # Diff was correctly executed
    assert test_url in notification_submission
    assert ':-)' in notification_submission
    assert "Diff Full: Some initial text" in notification_submission
    assert "Diff: (changed) Which is across multiple lines" in notification_submission
    assert "(into ) which has this one new line" in notification_submission
    # Re #342 - check for accidental python byte encoding of non-utf8/string
    assert "b'" not in notification_submission
    assert re.search('Watch UUID: [0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}', notification_submission, re.IGNORECASE)
    assert "Watch title: my title" in notification_submission
    assert "Watch tag: my tag" in notification_submission
    assert "diff/" in notification_submission
    assert "preview/" in notification_submission
    assert "New ChangeDetection.io Notification - {}".format(test_url) in notification_submission
    # Re #65 - did we see our foobar.com BASE_URL ?
    # assert bytes("https://foobar.com".encode('utf-8')) in notification_submission
    if env_base_url:
        # Re #65 - did we see our BASE_URL ?
        logging.debug(">>> BASE_URL checking in notification: %s", env_base_url)
        assert env_base_url in notification_submission
    else:
        logging.debug(">>> Skipping BASE_URL check")
    ## Now configure something clever, we go into custom config (non-default) mode, this is returned by the endpoint
    with open("test-datastore/endpoint-content.txt", "w") as f:
        f.write(";jasdhflkjadshf kjhsdfkjl ahslkjf haslkjd hfaklsj hf\njl;asdhfkasj stuff we will detect\n")
    res = client.post(
        url_for("settings_page"),
        data={"application-notification_title": "New ChangeDetection.io Notification - {watch_url}",
              "application-notification_urls": "json://foobar.com",  # Re #143 should not see that it sent without [test checkbox]
              "requests-time_between_check-minutes": 180,
              "fetch_backend": "html_requests",
              },
        follow_redirects=True
    )
assert b"Settings updated." in res.data
# Re #143 - should not see this if we didnt hit the test box
assert b"Notifications queued" not in res.data
    # This should insert the {current_snapshot}
    set_more_modified_response()

    # Trigger a check
    client.get(url_for("form_watch_checknow"), follow_redirects=True)
    time.sleep(3)

    # Did the front end see it?
    res = client.get(
        url_for("index"))
    assert bytes("just now".encode('utf-8')) in res.data

    # Verify what was sent as a notification, this file should exist
    with open("test-datastore/notification.txt", "r") as f:
        notification_submission = f.read()
assert "Ohh yeah awesome" in notification_submission
assert re.search('Watch UUID: [0-9a-f]{8}(-[0-9a-f]{4}){3}-[0-9a-f]{12}', notification_submission, re.IGNORECASE)
assert "Watch title: my title" in notification_submission
assert "Watch tag: my tag" in notification_submission
assert "diff/" in notification_submission
assert "preview/" in notification_submission
assert ":-)" in notification_submission
assert "New ChangeDetection.io Notification - {}".format(test_url) in notification_submission
    # Prove that "content constantly being marked as Changed with no Updating causes notification" is not a thing
    # https://github.com/dgtlmoon/changedetection.io/discussions/192
    os.unlink("test-datastore/notification.txt")

    # Trigger a few checks with no content change - no new notification should be written
    client.get(url_for("form_watch_checknow"), follow_redirects=True)
    time.sleep(1)
    client.get(url_for("form_watch_checknow"), follow_redirects=True)
    time.sleep(1)
    client.get(url_for("form_watch_checknow"), follow_redirects=True)
    time.sleep(1)
    assert not os.path.exists("test-datastore/notification.txt")
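    # Even though nothing new was sent, the earlier notification should still appear in the notification log.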
res = client.get(url_for("notification_logs"))
# be sure we see it in the output log
assert b'New ChangeDetection.io Notification - ' + test_url.encode('utf-8') in res.data
# cleanup for the next
client.get(
url_for("form_delete", uuid="all"),
follow_redirects=True
)

def test_notification_validation(client, live_server):
    #live_server_setup(live_server)
    time.sleep(3)

    # re #242 - when you edited an existing new entry, it would not correctly show the notification settings
    # Add our URL via the quick watch add form
    test_url = url_for('test_endpoint', _external=True)
    res = client.post(
        url_for("form_quick_watch_add"),
        data={"url": test_url, "tag": 'nice one'},
        follow_redirects=True
    )
    assert b"Watch added" in res.data
    # Re #360 some validation
    res = client.post(
        url_for("edit_page", uuid="first"),
        data={"notification_urls": 'json://localhost/foobar',
              "notification_title": "",
              "notification_body": "",
              "notification_format": "Text",
              "url": test_url,
              "tag": "my tag",
              "title": "my title",
              "headers": "",
              "fetch_backend": "html_requests"},
        follow_redirects=True
    )
    assert b"Notification Body and Title is required when a Notification URL is used" in res.data
    # Now adding a wrong token should give us an error
    res = client.post(
        url_for("settings_page"),
        data={"application-notification_title": "New ChangeDetection.io Notification - {watch_url}",
              "application-notification_body": "Rubbish: {rubbish}\n",
              "application-notification_format": "Text",
              "application-notification_urls": "json://localhost/foobar",
              "requests-time_between_check-minutes": 180,
              "fetch_backend": "html_requests"
              },
        follow_redirects=True
    )
    assert bytes("is not a valid token".encode('utf-8')) in res.data
    # cleanup for the next
    client.get(
        url_for("form_delete", uuid="all"),
        follow_redirects=True
    )
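
# ---------------------------------------------------------------------------
# Illustrative sketch only (assumed names, not part of the original test file):
# the assertions above rely on the live test server exposing a route that the
# json:// Apprise notification posts to, and which records the rendered
# notification into test-datastore/notification.txt. A minimal, hypothetical
# sink could look roughly like this:
def _example_notification_sink(app):
    # 'app' is assumed to be the Flask app behind the live test server.
    from flask import request

    @app.route('/test_notification_endpoint', methods=['POST'])
    def _record_notification():
        # Persist the raw posted payload so tests can assert on the expanded
        # notification title/body tokens.
        with open("test-datastore/notification.txt", "wb") as f:
            f.write(request.data)
        return "OK"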
