Compare commits

...

1562 Commits

Author SHA1 Message Date
dgtlmoon
3078218cfb Fix unset default var 2025-01-29 10:20:55 +01:00
dgtlmoon
92ce7d29b6 Be sure to clean off earlier watches 2025-01-28 18:21:27 +01:00
dgtlmoon
06350b1a8c Merge branch 'master' into fix-mixed-html-alerts 2025-01-28 18:15:19 +01:00
dgtlmoon
b1e700b3ff Adding jinja2/browsersteps test (#2915) 2025-01-28 18:14:49 +01:00
dgtlmoon
d632647574 Increase timeout, maybe github will pass more reliably 2025-01-27 17:27:05 +01:00
dgtlmoon
40907f1658 Merge branch 'master' into fix-mixed-html-alerts 2025-01-27 17:25:02 +01:00
Iftekhar Alam Fuad
1c61b5a623 Header handling - Fix header parsing to split on the first colon only (headers whose value contained '://' may have been broken) (#2929) 2025-01-26 00:08:09 +01:00
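The header-parsing fix above comes down to splitting each "Name: value" line on the first colon only, so values that themselves contain '://' survive intact. A minimal sketch of that idea (the helper name and sample headers are illustrative, not the project's actual code):

```python
def parse_header_lines(raw_headers: str) -> dict:
    """Parse 'Name: value' lines, splitting on the first colon only,
    so values such as 'https://example.com' are not truncated."""
    headers = {}
    for line in raw_headers.splitlines():
        if ':' not in line:
            continue
        name, value = line.split(':', 1)   # maxsplit=1 keeps '://' inside the value
        headers[name.strip()] = value.strip()
    return headers

print(parse_header_lines("Referer: https://example.com/page\nX-Token: abc"))
# {'Referer': 'https://example.com/page', 'X-Token': 'abc'}
```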
dgtlmoon
e799a1cdcb 0.49.00 2025-01-21 13:40:01 +01:00
dgtlmoon
938065db6f Update README.md 2025-01-20 16:10:54 +01:00
dgtlmoon
4f2d38ff49 Build/Libraries - Pin 'referencing' library which breaks due to outdated flask_expects_json, remove pip upgrade in test (#2912) 2025-01-18 23:20:58 +01:00
dgtlmoon
8960f401b7 Notifications - Custom POST:// GET:// etc endpoints - returning 204 and other 20x responses is OK (don't report that an error was detected) (#2897) 2025-01-13 13:13:18 +01:00
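The 20x commit above means any 2xx reply (200, 201, 204, ...) from a custom notification endpoint should count as delivered. A rough sketch of that check, assuming a plain requests call rather than the project's actual notification plugin:

```python
import requests

def send_custom_notification(url: str, body: str) -> None:
    resp = requests.post(url, data=body.encode("utf-8"), timeout=30)
    # Any 2xx status counts as success; only genuine error codes
    # are surfaced as a failed notification.
    if not (200 <= resp.status_code < 300):
        raise RuntimeError(f"Notification endpoint returned {resp.status_code}")
```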
dgtlmoon
1c1f1c6f6b 0.48.06 2025-01-09 23:02:29 +01:00
dgtlmoon
8604ab57a8 Align test and code 2025-01-09 23:01:35 +01:00
dgtlmoon
a2a98811a5 Restock - Add test for new lower/higher price notification Re #2715 (#2892) 2025-01-09 22:59:55 +01:00
dgtlmoon
52926efbd1 Re #2866 - Make sure any HTML type notifications have their content escaped - except for our added/removed/changed markup 2025-01-09 22:43:52 +01:00
dgtlmoon
5a0ef8fc01 Update integration test for "linuxserver" test build (#2891) 2025-01-09 21:36:39 +01:00
dgtlmoon
d90de0851d Notifications - Update Apprise to 1.9.2 - Fixes custom posts:// gets:// etc URLs being double-encoded, fixes chanify:// notifications (#2868) (#2875) (#2870) 2025-01-09 21:16:32 +01:00
dgtlmoon
360b4f0d8b Custom posts:// get:// notifications etc - Be sure our custom extensions are imported (#2890) 2025-01-09 21:10:09 +01:00
dgtlmoon
6fc04d7f1c "Send test notification" button - Easier to understand test send results, Improved error handling, code refactor (#2888) 2025-01-08 14:35:41 +01:00
dgtlmoon
66fb05527b Improve last_checked vs last_changed time information precision (#2883) 2025-01-06 20:38:50 +01:00
William Brawner
202e47d728 Update Apprise to 1.9.1 (#2876) 2025-01-02 20:06:25 +01:00
Florian Kretschmer
d67d396b88 Builder/Docker - Remove PUID and PGID (they were not used) (#2852) 2024-12-27 13:03:36 +01:00
MoshiMoshi0
05f54f0ce6 UI - Fix diff not starting from last viewed snapshot (#2744) (#2856) 2024-12-27 13:03:10 +01:00
dgtlmoon
6adf10597e 0.48.05 2024-12-27 11:24:56 +01:00
dgtlmoon
4419bc0e61 Fixing test for CVE-2024-56509 (#2864) 2024-12-27 11:09:52 +01:00
dgtlmoon
f7e9846c9b CVE-2024-56509 - Stricter file protocol checking pre-check ( Improper Input Validation Leading to LFR/Path Traversal when fetching file:.. ) 2024-12-27 09:26:28 +01:00
dgtlmoon
5dea5e1def 0.48.04 2024-12-16 21:50:53 +01:00
dgtlmoon
0fade0a473 Windows was sometimes missing timezone data (#2845 #2826) 2024-12-16 21:50:28 +01:00
dgtlmoon
121e9c20e0 0.48.03 2024-12-16 16:14:03 +01:00
dgtlmoon
12cec2d541 0.48.02 2024-12-16 16:10:47 +01:00
dgtlmoon
d52e6e8e11 Notifications - "Send test" was not always following "System default notification format" (#2844) 2024-12-16 15:50:07 +01:00
dgtlmoon
bae1a89b75 Notifications - Default notification format (for new installs) now "HTML color" (#2843) 2024-12-16 14:55:10 +01:00
dgtlmoon
e49711f449 Notification - HTML Color format notification colors should be same as UI, {{diff_full}} token should also get HTML colors ( #2842 #2554 ) 2024-12-16 14:46:39 +01:00
dgtlmoon
a3a3ab0622 Notifications - Adding "HTML Color" notification format option (#2837) 2024-12-13 11:21:39 +01:00
dgtlmoon
c5fe188b28 UI - Make 'tag' sticky - redirect to current tag on edit or add watch (#2824 #2785) 2024-12-04 18:25:26 +01:00
dgtlmoon
1fb0adde54 Notifications - Support for commented out notification URLs (#2825 #2769) 2024-12-04 18:08:52 +01:00
dgtlmoon
2614b275f0 Docs - Adding information to README.md about the new scheduler 2024-12-04 08:52:40 +01:00
dgtlmoon
1631a55830 0.48.01 2024-12-03 18:44:20 +01:00
dgtlmoon
f00b8e4efb UI - Fixing scheduler options 2024-12-03 18:11:14 +01:00
dgtlmoon
179ca171d4 0.48.00 2024-12-03 14:26:01 +01:00
Tyler Schrock
84f2870d4f Fix HIDE_REFERER env option for hiding changedetection.io from referer headers (#2787) 2024-12-03 12:54:58 +01:00
dgtlmoon
7421e0f95e New functionality - Time (weekday + time) scheduler / duration (#2802) 2024-12-03 12:45:28 +01:00
Taylan Tatlı
c6162e48f1 Add Turkish phrases for out-of-stock detection (#2809) 2024-11-28 11:28:22 +01:00
dgtlmoon
feccb18cdc UI - Always use UTC timezone for storing data, show local timezone (#2799) 2024-11-21 08:58:26 +01:00
dgtlmoon
1462ad89ac Update stock-not-in-stock.js 2024-11-20 15:20:53 +01:00
Kenny Root
cfb9fadec8 Python 3.13 compatibility (#2791) 2024-11-20 09:41:56 +01:00
Kenny Root
d9f9fa735d Code - Update .gitignore and .dockerignore (#2797) 2024-11-20 09:41:32 +01:00
dgtlmoon
6084b0f23d VisualSelector - Use 'deflate' for storing elements.json, 90% file size reduction (#2794) 2024-11-19 17:28:21 +01:00
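The elements.json size reduction above comes from deflate compression. With Python's standard zlib module the round-trip looks roughly like this (the file name and sample data are illustrative, not the project's storage layout):

```python
import json
import zlib

elements = {"size_pos": [{"xpath": "//button[1]", "width": 120, "height": 32}]}

# Write: serialise to JSON, then deflate-compress the bytes
with open("elements.deflate", "wb") as f:
    f.write(zlib.compress(json.dumps(elements).encode("utf-8")))

# Read: inflate and parse back to the original structure
with open("elements.deflate", "rb") as f:
    restored = json.loads(zlib.decompress(f.read()).decode("utf-8"))

assert restored == elements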
dgtlmoon
4e18aea5ff UI - Show local timezone info in settings (for future functionality) #2793 2024-11-19 15:44:50 +01:00
dgtlmoon
fdba6b5566 Notification - Locking paho-mqtt:// version fix 2024-11-19 14:40:02 +01:00
dgtlmoon
4e6c783c45 Update COMMERCIAL_LICENCE.md 2024-11-18 16:44:03 +01:00
dgtlmoon
0f0f5af7b5 Ability to disable version check (set DISABLE_VERSION_CHECK=true) Re #2773 (#2775) 2024-11-10 15:48:05 +01:00
dgtlmoon
7fcba26bea Minor improvement for queue management 2024-11-10 11:06:47 +01:00
dgtlmoon
4bda1a234f Update bug_report.md 2024-11-10 10:13:22 +01:00
dgtlmoon
d297850539 Security - Fix test 2024-11-07 20:10:02 +01:00
dgtlmoon
751239250f Security check - improve test 2024-11-07 19:41:48 +01:00
dgtlmoon
6aceeb01ab 0.47.06 2024-11-07 18:47:18 +01:00
dgtlmoon
49bc982c69 CVE-2024-51998 - file:/ path traversal should not allow access to a file without ALLOW_FILE_URI set 2024-11-07 18:45:19 +01:00
Arthur Nogueira Neves
e0abf0b505 Update docker-compose.yml (#2767) 2024-11-06 18:41:55 +01:00
dgtlmoon
f08a1185aa Price tracker - fix for sites that supply an empty additional price (#2758) 2024-11-01 10:56:27 +01:00
dgtlmoon
ad5d7efbbf Testing - Pinning werkzeug (#2757) 2024-11-01 10:23:34 +01:00
dgtlmoon
7029d10f8b 0.47.05 2024-10-31 22:51:03 +01:00
dgtlmoon
26d3a23e05 CVE-2024-51483 - Fix for limiting access to file:// via source:file:///tmp/file.txt when using webdriver/playwright 2024-10-31 22:49:31 +01:00
dgtlmoon
942625e1fb Backups - Hide incomplete/running backups from being downloaded 2024-10-31 10:58:41 +01:00
dgtlmoon
33c83230a6 Backups - Backups now operate in the background, provide a nice UI to access/download previous backups (#2755) 2024-10-31 10:34:59 +01:00
dgtlmoon
87510becb5 Filters - Process all CSS and XPath 'subtract' selectors in a single pass to prevent index shifting and reference loss during DOM manipulation. (#2754) 2024-10-30 12:00:53 +01:00
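The 'subtract' selector commit above is about resolving every CSS/XPath match first and only then removing nodes, so earlier removals cannot shift indexes or invalidate references still waiting to be processed. A minimal sketch of that two-pass ordering with lxml (the selectors and HTML are illustrative):

```python
from lxml import etree, html

doc = html.fromstring("<div><p class='ad'>ad</p><p>keep</p><p class='ad'>ad</p></div>")

subtract_selectors = ["//p[@class='ad']"]   # illustrative 'subtract' filters

# First pass: collect every element matched by every selector...
to_remove = []
for selector in subtract_selectors:
    to_remove.extend(doc.xpath(selector))

# ...then remove them in a second pass, after all lookups are done.
for el in set(to_remove):
    el.getparent().remove(el)

print(etree.tostring(doc, encoding="unicode"))  # <div><p>keep</p></div>
```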
dgtlmoon
5e95dc62a5 0.47.04 2024-10-29 08:25:05 +01:00
dgtlmoon
7d94535dbf Do not recheck 'paused' watches on edit/save (Re #2747 #2750) 2024-10-29 08:24:15 +01:00
dgtlmoon
563c196396 Notification post:// get:// etc - Fixing URL encoding of headers so that '+' in URL is correctly parsed as ' ' (and other url-encodings) (#2745) 2024-10-28 16:59:49 +01:00
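The '+'-decoding fix above concerns form-style URL encoding, where '+' stands for a space. Python's urllib.parse shows the difference between the two decoding rules (a sketch of the rule itself, not the project's notification code):

```python
from urllib.parse import unquote, unquote_plus

raw = "X-Custom-Header=hello+world%21"
name, value = raw.split("=", 1)

print(unquote(value))       # 'hello+world!'  -- '+' left untouched
print(unquote_plus(value))  # 'hello world!'  -- '+' decoded to a space, as intended
```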
Christopher Charbonneau Wells
e8b82c47ca #2502 - Add jinja2 template handling to request body and headers (#2740) 2024-10-28 15:46:05 +01:00
Gonçalo Silva
e84de7e8f4 Restock detection - Add additional out-of-stock detection for PT language (#2738) 2024-10-24 20:03:14 +02:00
dgtlmoon
1543edca24 "Send test notification" in "Restock" mode was not working correctly when restock tokens "{{restock.price}}" were in the notification body (#2737) 2024-10-24 19:46:45 +02:00
dgtlmoon
82e0b99b07 #2727 Notifications - Fix "send test notification" on empty list, includes test (#2731) 2024-10-21 11:35:37 +02:00
Emmanuel Ojighoro
b0ff9d161e UI - Fix mobile styling inconsistencies and resolve diff page overflow issue (#2716) 2024-10-21 11:34:22 +02:00
dgtlmoon
c1dd681643 Filters - "Block change detection when text exists" should not trigger a change when the original text returns 2024-10-14 12:57:02 +02:00
dgtlmoon
ecafa27833 UI - More work on tab buttons hiding behind menu/header :-) 2024-10-11 22:54:09 +02:00
dgtlmoon
f7d4e58613 0.47.03 2024-10-11 17:33:00 +02:00
dgtlmoon
5bb47e47db Remove same checksum skip check - saved a little CPU but added a lot of complexity (#2700) 2024-10-11 17:28:42 +02:00
dgtlmoon
03151da68e UI - Fix scroll offset / tab buttons hiding behind menu/header 2024-10-11 16:04:08 +02:00
dgtlmoon
a16a70229d 0.47.01 2024-10-11 15:02:17 +02:00
dgtlmoon
9476c1076b Adding missing apprise_plugin for pypi/pip based installs 2024-10-11 15:01:27 +02:00
dgtlmoon
a4959b5971 0.47.00 2024-10-11 13:04:56 +02:00
dgtlmoon
a278fa22f2 Restock multiprice improvements (#2698) 2024-10-11 11:43:35 +02:00
dgtlmoon
d39530b261 Test - Simple test for live preview 2024-10-11 11:07:12 +02:00
dgtlmoon
d4b4355ff5 Adding test for proxy checker/scanner (#2697) 2024-10-11 09:52:55 +02:00
dgtlmoon
c1c8de3104 Fixing proxy checker (#2696) 2024-10-11 00:19:19 +02:00
dgtlmoon
5a768d7db3 UTF-8 handling fixes, Improvements to whitespace filtering (#2691) 2024-10-10 14:59:39 +02:00
dgtlmoon
f38429ec93 Testing - Tidyup (#2693) 2024-10-10 12:45:23 +02:00
dgtlmoon
783926962d Filters & Text - Preview refactor/improvements (#2689) 2024-10-09 09:17:32 +02:00
Marc
6cd1d50a4f Build - Add image source label to Dockerfile (Better Renovate and others support) (#2690) 2024-10-09 08:30:23 +02:00
dgtlmoon
54a4970a4c Custom JSON/POST Notifications - Log when it could not apply the application/json content-type header 2024-10-08 09:48:38 +02:00
dgtlmoon
fd00453e6d UI - Filters live preview - improvements to layout 2024-10-08 08:59:10 +02:00
dgtlmoon
2842ffb205 Restock - Use the scraped 'Not in stock' product status over the metadata version (many website lie in the metadata) (#2684) 2024-10-07 20:10:35 +02:00
dgtlmoon
ec4e2f5649 UI - Better 40x error message (#2685) 2024-10-07 16:52:19 +02:00
dgtlmoon
fe8e3d1cb1 Visual Selector - Including <button> (#2686) 2024-10-07 16:52:04 +02:00
dgtlmoon
69fbafbdb7 Stock/not-in-stock scraper - slight reliability improvement (#2687) 2024-10-07 16:51:47 +02:00
dgtlmoon
f255165571 Code - Small improvements in logging 2024-10-07 16:01:55 +02:00
Emmanuel Ojighoro
7ff34baa90 UI - CSS - Fix on sorting row wrapping issue (#2680) 2024-10-07 09:02:21 +02:00
dgtlmoon
043378d09c UI - Live filters preview - Better handling of watch preferences 2024-10-05 19:40:36 +02:00
dgtlmoon
af4bafcff8 UI - "Diff" button in overview list is now "History" button (#2679) 2024-10-05 17:24:29 +02:00
dgtlmoon
b656338c63 UI - Improve error handling when a module is missing when editing a URL/link (#2678) 2024-10-05 16:58:40 +02:00
dgtlmoon
97af190910 UI - Live filters preview - Make it sticky in the viewport so its easier to build nice filters 2024-10-05 16:53:45 +02:00
dgtlmoon
e9e063e18e UI - Live filters preview - dark mode improvements 2024-10-05 16:51:33 +02:00
dgtlmoon
45c444d0db UI - Improvements to text preview on mobile 2024-10-05 16:47:00 +02:00
dgtlmoon
00458b95c4 UI - Improvements to live preview of Filters text
"Ignore text" is now "Remove text", it works the same but it removes the text instead of ignoring it, which is the same thing, but makes the code simpler
2024-10-05 16:32:28 +02:00
Emmanuel Ojighoro
dad9760832 UI - Misc fixes for mobile styling (#2669) 2024-10-05 16:11:53 +02:00
dgtlmoon
e2c2a76cb2 Update docker-compose.yml - Adding example for enabling change detection on local files 2024-10-05 15:33:05 +02:00
dgtlmoon
5b34aece96 UI - Live preview - misc improvements (Adding test, fixes to filters) (#2663) 2024-09-30 13:54:35 +02:00
dgtlmoon
1b625dc18a UI - "Filters & Triggers" - Live preview of text filters (Preview the output of the filters section in realtime) (#2612) 2024-09-28 10:40:47 +02:00
dgtlmoon
367afc81e9 Reversing subprocess execution - saved a little memory but used a LOT more CPU (#2659) 2024-09-27 21:36:02 +02:00
dgtlmoon
ddfbef6db3 [test] Use local data instead of reaching out to changedetection when testing (#2660) 2024-09-27 20:30:19 +02:00
dgtlmoon
e173954cdd Restock monitor - Only try to process restock information (like scraping for "out of stock" keywords) if the page was actually rendered correctly. (#2645) 2024-09-20 09:19:57 +02:00
dgtlmoon
e830fb2320 Text filters - Adding filters "Trim whitespace" and "Remove duplicate lines" 2024-09-18 15:45:44 +02:00
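The two text filters added above are simple line-level transforms; behaviour along these lines (a sketch under that assumption, not the actual filter implementation):

```python
def trim_whitespace(text: str) -> str:
    """Strip leading/trailing whitespace from every line."""
    return "\n".join(line.strip() for line in text.splitlines())

def remove_duplicate_lines(text: str) -> str:
    """Drop repeated lines while keeping the first occurrence in order."""
    seen = set()
    out = []
    for line in text.splitlines():
        if line not in seen:
            seen.add(line)
            out.append(line)
    return "\n".join(out)

print(remove_duplicate_lines(trim_whitespace("  a \n b\n  a ")))  # "a\nb"
```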
dgtlmoon
c6589ee1b4 Browser Steps - UI - Use a better flexbox layout 2024-09-18 11:26:10 +02:00
Michael McMillan
dc936a2e8a Filters - Add support for also removing HTML elements using XPath selectors (#2632) 2024-09-17 22:43:04 +02:00
dgtlmoon
8c1527c1ad Update AppRise notification library to 1.9.0 (#2624) 2024-09-17 19:06:17 +02:00
Dawid Wróbel
a5ff1cd1d7 browser_steps: add "click element containing text if exists" (#2629) 2024-09-17 18:30:54 +02:00
dgtlmoon
543cb205d2 Testing - Fixing Restock test #2641 2024-09-17 18:29:12 +02:00
dgtlmoon
273adfa0a4 Testing - Fix false filter missing check alerts 2024-09-17 16:55:04 +02:00
Felipe Tuffani
8ecfd17973 Restock/Price detection - Fix duplicated prices with different data type on single page product #2636 (#2638) 2024-09-17 11:22:54 +02:00
dgtlmoon
19f3851c9d Memory management improvements - LXML and other libraries can leak allocation, wrap in a sub-process (#2626) 2024-09-11 16:20:49 +02:00
dgtlmoon
7f2fa20318 Small memory allocation fixes (#2625) 2024-09-11 14:51:32 +02:00
dgtlmoon
e16814e40b Testing - locale fix for test (#2623) 2024-09-11 11:31:07 +02:00
dgtlmoon
337fcab3f1 Testing/Code - Improving test reliability (#2617) 2024-09-09 16:50:00 +02:00
dgtlmoon
eaccd6026c UI - Hiding noisy info under 'show advanced help' button (#2609) 2024-09-06 14:33:06 +02:00
dgtlmoon
5b70625eaa 0.46.04 2024-09-04 13:55:18 +02:00
dgtlmoon
60d292107d Fixing restock monitor tests and tweaking docker default config example, 2024-09-02 15:11:31 +02:00
dgtlmoon
1cb38347da Container name should be 'sockpuppetbrowser' because it's not just Playwright that uses it 2024-09-02 13:21:38 +02:00
dgtlmoon
55fe2abf42 Restock/Price detection - Better catching of errors when parsing metadata documents for restock/price check (#2602) 2024-09-01 13:07:06 +02:00
dgtlmoon
4225900ec3 Restock - updating texts and text offsets 2024-09-01 12:47:21 +02:00
dgtlmoon
1fb4342488 Build - Unpin jsonschema for faster builds (#2583) 2024-08-22 15:02:00 +02:00
dgtlmoon
7071df061a Price detection/scraping - Adding extra element training data (#2582) 2024-08-22 15:01:36 +02:00
dgtlmoon
6dd1fa2b88 0.46.03 2024-08-19 17:22:13 +02:00
dgtlmoon
371f85d544 Watch 'Download last snapshot' link/button should give last, not first snapshot (#2576) 2024-08-19 17:20:30 +02:00
dgtlmoon
932cf15e1e Price and restock scraping - small price fix scraper (#2575) 2024-08-19 15:47:19 +02:00
Mike Splain
bf0d410d32 Browser Steps UI - Interactive UI wasn't sending headers but was when the check ran (#2551) 2024-08-19 10:21:05 +02:00
dgtlmoon
730f37c7ba Set encoding type for scraper script reader (#2574 #2568) 2024-08-19 09:17:18 +02:00
dgtlmoon
8a35d62e02 Handle zero-byte/empty content responses with "[ ] Empty pages are a change" option, the same as when the HTML doesn't render any useful text (#2530) 2024-07-29 13:27:59 +02:00
dgtlmoon
f527744024 0.46.02 2024-07-27 20:28:04 +02:00
dgtlmoon
71c9b1273c Adding test for #1995 UTF-8 encoding in POST request body and post:// notifications (#2525) 2024-07-27 19:47:03 +02:00
dgtlmoon
ec68450df1 Updating Apprise notification library, Splunk/VictorOps, Africas Talking, Microsoft Power Automate / Workflows, Société Française du Radiotéléphone (SFR) support (#2524) 2024-07-27 14:28:57 +02:00
dgtlmoon
2fd762a783 Encode POST style requests and notifications as UTF-8 if it has no encoding/basic string (#2523) 2024-07-27 14:27:15 +02:00
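The UTF-8 commit above boils down to encoding plain-string bodies to bytes before sending, so non-ASCII payloads survive the POST unchanged. Roughly (the helper name is illustrative):

```python
def ensure_utf8_body(body):
    """If the request/notification body is a plain str, encode it as UTF-8
    bytes; bytes (or None) are passed through untouched."""
    if isinstance(body, str):
        return body.encode("utf-8")
    return body

assert ensure_utf8_body("prix: 10€") == "prix: 10€".encode("utf-8")
```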
Kenny Root
d7e85ffe8f VisualSelector+BrowserSteps - When scraping elements, check for null results (#2517) 2024-07-25 14:44:10 +02:00
Kenny Root
d23a301826 Use #!/usr/bin/env to support virtualenv (#2518) 2024-07-25 14:39:01 +02:00
dgtlmoon
3ce6096fdb Update README.md 2024-07-21 18:18:24 +02:00
dgtlmoon
8acdcdd861 UI - Adding "Download latest HTML snapshot" from Edit Watch > Stats page for easier debugging (#2513) 2024-07-21 18:17:34 +02:00
dgtlmoon
755cba33de 0.46.01 2024-07-19 13:53:00 +02:00
dgtlmoon
8aae7dfae0 UI - Fixing up 'test notification' bug from main settings and tag settings pages #2510 (#2511) 2024-07-19 13:51:15 +02:00
dgtlmoon
ed00f67a80 0.46.00 2024-07-18 14:13:06 +02:00
dgtlmoon
44e7e142f8 Restock/Price detection - Improving text information snapshot value 2024-07-18 13:28:54 +02:00
dgtlmoon
fe704e05a3 Restock - Tweaking storage of "original price" 2024-07-18 13:15:56 +02:00
dgtlmoon
e756e0af5e Fixing file:// file pickup - for change detection of local files (#2505) 2024-07-18 13:05:27 +02:00
dgtlmoon
c0b6c8581e Adding Apple M1 Pro type arm64/v8 support docker image (#2507) 2024-07-18 11:58:34 +02:00
dgtlmoon
de558f208f Dropping older ARM v6 support due to dependencies not having support (#2506) 2024-07-18 10:54:55 +02:00
dgtlmoon
321426dea2 Ability to use restock and price amounts in notifications as tokens (for example {{restock.price}} ) (#2503) 2024-07-17 20:27:47 +02:00
dgtlmoon
bde27c8a8f Restock & Price detection - Ability to set up a tag/group that applies to all watches with price + restock limits 2024-07-16 17:23:39 +02:00
dgtlmoon
1405e962f0 Fixing problematic tag assigning via UI which caused watches to not accept new settings 2024-07-16 17:09:13 +02:00
dgtlmoon
a9f10946f4 Fixing first history/preview save issue (First version after an error, on the first check, wasn't available) (#2494) 2024-07-15 12:30:06 +02:00
dgtlmoon
6f2186b442 UI - Restock/price following text cleanups 2024-07-14 07:50:52 +02:00
dgtlmoon
cf0ff26275 UI - Extract <title> as title should work on all processors (#2490) 2024-07-12 19:42:18 +02:00
dgtlmoon
cffb6d748c Restock & Price monitor - Huge refactor, set upper and lower price alert limits, set % change, follow the prices and restock amounts directly in the watch-overview list 2024-07-12 17:09:42 +02:00
dgtlmoon
99b0935b42 Product checks - Just a basic string check is far more efficient for suggestion price/restock check plugin (#2488) 2024-07-12 14:46:36 +02:00
dgtlmoon
f1853b0ce7 Update COMMERCIAL_LICENCE.md 2024-07-12 13:04:54 +02:00
dgtlmoon
c331612a22 Update README.md - Adding link to COMMERCIAL_LICENCE.md for those interested in reselling the software 2024-07-12 12:48:54 +02:00
dgtlmoon
445bb0dde3 Adding COMMERCIAL_LICENCE.md 2024-07-12 12:46:12 +02:00
dgtlmoon
8f3a6a42bc Testing - Adding simple memory usage test (#2483) 2024-07-11 15:03:42 +02:00
dgtlmoon
732ae1d935 Bugfix - Watches with BrowserSteps should recreate the data-dir if it was missing (in the case that you deleted/migrated) (#2484) 2024-07-11 15:03:30 +02:00
dgtlmoon
5437144dff 0.45.26 2024-07-11 13:52:13 +02:00
dgtlmoon
ed38012c6e Code - Fixing deprecation warning (#2477) 2024-07-09 17:38:17 +02:00
dgtlmoon
f07ff9b55e UI - Visual Selector should still update when elements were not found (#2476) 2024-07-09 15:35:19 +02:00
Nectariferous
1c46914992 Code - Update/modernise diff.py (#2471) 2024-07-09 15:08:13 +02:00
dgtlmoon
e9c4037178 UI - Visual Selector - Multiple selections (refactor) (#2475) 2024-07-09 15:07:23 +02:00
dgtlmoon
1af342ef64 UI - Visual Selector now supports Shift+Click for multiple selections! 2024-07-05 20:43:26 +02:00
dgtlmoon
e09ee7da97 UI - Visual Selector - Show/visualise all/any matching filter elements from all filters in "CSS/JSONPath/JQ/XPath Filters" include filters (#2440) 2024-07-05 15:20:39 +02:00
dgtlmoon
09bc24ff34 UI - Visual Selector graphics should be centred 2024-07-05 14:33:36 +02:00
dgtlmoon
a1d04bb37f Snapshot count from history was not updated in watch after using [clear history] (#2459) 2024-07-05 11:09:31 +02:00
dgtlmoon
01f910f840 Fixing 'tags' field from old installs (0.43.0+) could have the wrong data type, causing a crash 2024-07-04 15:23:06 +02:00
dgtlmoon
bed16009bb 0.45.25 2024-07-03 19:27:23 +02:00
dgtlmoon
faeed78ffb UI - Fixing preview/diff "ignore text" highlight button (refactor, didn't work in "preview" mode) (#2455) 2024-07-03 19:26:33 +02:00
dgtlmoon
5d9081ccb2 Restock detection - Updating detection texts 2024-07-03 18:45:36 +02:00
dgtlmoon
2cf1829073 UI - Mobile - Hiding empty columns 2024-07-03 17:13:31 +02:00
dgtlmoon
526551a205 UI - Mobile - Watch overview table - Sort/order buttons were not being shown correctly 2024-06-30 18:05:13 +02:00
dgtlmoon
ba139e7f3f Update docker-compose.yml - fix indentation re #2447 2024-06-28 23:08:37 +02:00
Max Michels
13e343f9da Restock detection - Added extra out-of-stock phrases for DE (#2442) 2024-06-26 11:03:00 +02:00
dgtlmoon
13be4623db Restock detection - updating texts 2024-06-25 13:23:43 +02:00
dgtlmoon
3b19e3d2bf UI - Fixing double punctuation in 'unpaused' message #2435 2024-06-24 09:15:48 +02:00
dependabot[bot]
ce42f8ea26 Build - Bump docker/build-push-action from 5 to 6 in the all group (#2436) 2024-06-24 08:50:02 +02:00
dgtlmoon
343e359b39 Now saving last two HTML snapshots for future reference, refactor, dont write screenshots and xpath to disk when no change detected (saves disk IO) (#2431) 2024-06-23 09:19:32 +02:00
Hritik Vijay
ffd160ce0e Filters - Implement jqraw: filter (use this to output nicer JSON format when selecting/filtering by JSON) (#2430) 2024-06-21 13:31:03 +02:00
dgtlmoon
d31fc860cc Build - fixing build warnings 2024-06-20 15:07:17 +02:00
dgtlmoon
90b357f457 Upgrade to Python 3.11 from 3.10, add faster prebuilt "wheels" for rPi devices, upgrade cryptography security library 2024-06-20 14:42:17 +02:00
dgtlmoon
cc147be76e Prefer Python's built-in "importlib" over pkg_resources+setuptools (#2424) 2024-06-18 09:08:48 +02:00
dependabot[bot]
8ae5ed76ce Security/dependabot - Bump urllib3 from 1.26.18 to 1.26.19 (#2423) 2024-06-18 08:23:12 +02:00
dgtlmoon
a9ed113369 0.45.24 2024-06-17 13:27:11 +02:00
dgtlmoon
eacf920b9a Update eventlet ( Fixes SSL error on Python 3.12 ) (#2419) 2024-06-17 12:05:20 +02:00
dgtlmoon
c9af9b6374 Filter failure/not found notification threshold - Counter should be reset when editing a watch, clear watch errors on 'save' (#2413) 2024-06-17 11:42:41 +02:00
dependabot[bot]
5e65fb606b Bump dnspython from 2.3.0 to 2.6.1 (#2306) 2024-06-17 10:23:50 +02:00
dgtlmoon
434a1b242e Improve testing for Python 3.10, 3.11 and 3.12 2024-06-17 09:46:54 +02:00
dgtlmoon
bce02f9c82 UI - Add space around checkbox operation buttons so they work better in mobile #2393 2024-06-13 13:22:31 +02:00
dgtlmoon
76ffc3e891 RSS - Setting to hide muted watches in RSS feed (default ON) (#2411) 2024-06-13 11:52:12 +02:00
dgtlmoon
c6ee6687b5 Fetching/Requests - Fixing user agent header overrides per-watch of global settings (#2409) 2024-06-13 10:50:46 +02:00
dgtlmoon
de48892243 Code - improving unique key fix for history database handler (#2402)
* improving unique key fix

* also bump timestamp along 1 sec
2024-06-06 22:59:04 +02:00
dgtlmoon
6aded50aca UI - 'Mark all viewed' button should not show when all viewed (#2399) 2024-06-05 11:46:04 +02:00
dgtlmoon
b8e279a025 Fixing build test - Adding small delay (#2397) 2024-06-04 13:56:35 +02:00
dependabot[bot]
8041d00e75 Code - Bump eventlet from 0.33.3 to 0.35.2 (#2305) 2024-06-03 17:25:14 +02:00
dgtlmoon
6a0e14cfce UI - Mobile CSS/layout fix wrapping on empty list text #2393 2024-05-30 14:26:51 +02:00
dgtlmoon
be91c5425c UI - Preview single snapshot - Date and button fixes (#2389) 2024-05-27 12:51:01 +02:00
dgtlmoon
778680d517 Build - PIL/pillow package not used, probably shouldn't be installed/required (#2382) 2024-05-27 10:17:19 +02:00
dgtlmoon
7e8aa7e3ff 0.45.23 2024-05-22 00:01:50 +02:00
Alexander Sulfrian
d77f913aa0 RSS - Only insert feed header if app_rss_token is set (should be only shown in index/overview page) (#2381) 2024-05-21 23:47:35 +02:00
dgtlmoon
59cefe58e7 Fetcher - Using pyppeteerstealth with puppeteer fetcher (#2203) 2024-05-21 17:03:50 +02:00
dgtlmoon
cfc689e046 Fix overflowing text 2024-05-21 12:13:23 +02:00
Alexander Sulfrian
7b04b52e45 RSS and tags/groups - Fixes use active_tag_uuid, fixes broken RSS link in page html (#2379) 2024-05-20 15:49:12 +02:00
dgtlmoon
f49eb4567f Ability to set default User-Agent for either fetching types directly in the UI (#2375) 2024-05-20 15:11:15 +02:00
dgtlmoon
a8959be348 Testing - Fixing JSON test 2024-05-20 14:14:40 +02:00
dgtlmoon
05bf3c9a5c UI - Mobile - quick watch form element fixes 2024-05-20 13:43:59 +02:00
dgtlmoon
4293639f51 UI - CSS - Remove gradient border, it did not add much to the design #2377 2024-05-20 13:41:23 +02:00
dgtlmoon
f0ed4f64e8 RSS - Muted watches should not show in RSS feed (#2374 #2304) 2024-05-17 17:47:35 +02:00
dgtlmoon
add2c658b4 Notifications - Fixing truncated notifications when tgram:// or discord:// is used with other notification methods (#2372 #2299) 2024-05-17 09:20:26 +02:00
dgtlmoon
e27f66eb73 UI - Ability to preview/view single changes by timestamp using keyboard or select box(#1916) 2024-05-16 23:39:06 +02:00
dgtlmoon
e4504fee49 Browser Steps - Fixing "goto site" step #2330 #2337 (#2364) 2024-05-15 10:49:30 +02:00
dgtlmoon
5798581f18 Crash on older CPU - Setting LXML version to any version without the known modern-CPU-only CPU flags (#2365 #2328 ) 2024-05-15 10:17:51 +02:00
dgtlmoon
ef910b86ef Notifications - Update Apprise notification library to 1.8.0 (#2363 #2324) fixes mailto:// with IP as server endpoint 2024-05-14 14:01:13 +02:00
dgtlmoon
8d1fb96d18 UI - Refactor of the Recheck Time Settings, Added "Use default recheck time" checkbox and refactor/simplify system handling (#2362) 2024-05-14 13:51:03 +02:00
dgtlmoon
5df5d0fbe7 UI - Search should scan/search error messages (#2353) 2024-05-10 18:20:49 +02:00
dgtlmoon
815cba11ca UI - 'stats' tab should show what the server-type detected is ( #2348 ) 2024-05-07 15:23:42 +02:00
dgtlmoon
3aed4e5af9 Update README.md 2024-05-05 18:21:10 +02:00
dgtlmoon
3618c389c6 Notifications - Setting set minimum version for mqtt:// library notifications (#2334 / #2333) 2024-05-02 16:51:56 +02:00
dgtlmoon
d127214d8f 0.45.22 2024-05-02 12:09:45 +02:00
dgtlmoon
c0f000b1d1 Merge pull request from GHSA-pwgc-w4x9-gw67
* Auto-escape was not enabled GHSA-pwgc-w4x9-gw67

* Auto-escape was not enabled because the filenames were not something jinja2 enables it for.
2024-05-02 11:46:31 +02:00
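The advisory above notes that auto-escaping was off because the template filenames were not ones Jinja2 enables it for; select_autoescape() only switches escaping on for names ending in .html/.htm/.xml, so other names render unescaped unless autoescape is forced. A minimal illustration of the two behaviours (not the project's actual template setup):

```python
from jinja2 import Environment, select_autoescape

# select_autoescape() decides per template name: .html/.htm/.xml get escaping,
# other filenames fall back to the (disabled) default.
env_by_filename = Environment(autoescape=select_autoescape())

# Forcing autoescape=True escapes regardless of the template's filename.
env_forced = Environment(autoescape=True)

tmpl = env_forced.from_string("Hello {{ name }}")
print(tmpl.render(name="<script>alert(1)</script>"))
# Hello &lt;script&gt;alert(1)&lt;/script&gt;
```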
dgtlmoon
ee5294740a 0.45.21 2024-04-25 22:29:38 +02:00
dgtlmoon
bd6eda696c Merge pull request from GHSA-4r7v-whpg-8rx3
* CVE-2024-32651 - Security fix - Server Side Template Injection in Jinja2 allows Remote Command Execution

* use ImmutableSandboxedEnvironment also in validation
2024-04-25 22:06:09 +02:00
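The CVE-2024-32651 fix above relies on Jinja2's ImmutableSandboxedEnvironment, which keeps ordinary token substitution working while refusing the attribute and mutation tricks used for server-side template injection. A small sketch of that behaviour (the template strings are illustrative):

```python
from jinja2.sandbox import ImmutableSandboxedEnvironment
from jinja2.exceptions import SecurityError

env = ImmutableSandboxedEnvironment()

# Ordinary token substitution still works...
print(env.from_string("Price is now {{ price }}").render(price=9.99))

# ...but mutating/unsafe attribute access is refused by the sandbox,
# which blocks the usual SSTI-to-RCE gadget chains.
try:
    env.from_string("{{ my_list.append(42) }}").render(my_list=[1, 2, 3])
except SecurityError as e:
    print("blocked:", e)
```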
dgtlmoon
1ba29655f5 UI - Wrap tag names in a solid background to make them easier to read when there are multiple tags 2024-04-20 20:34:52 +02:00
dgtlmoon
830a0a3a82 UI - Error text on exception should contain the word Exception (#2322) 2024-04-19 08:54:25 +02:00
dgtlmoon
e110b3ee93 0.45.20 2024-04-18 11:55:46 +02:00
dgtlmoon
3ae9bfa6f9 Bug fix - further work on lxml filter extract (#2313 #2312 #2317) 2024-04-18 11:53:45 +02:00
dgtlmoon
6f3c3b7dfb 0.45.19 2024-04-17 20:01:35 +02:00
dgtlmoon
74707909f1 Bug fix for newer lxml module - module 'lxml.etree' has no attribute '_ElementStringResult' - reimplement _ElementStringResult (#2313 #2312) 2024-04-17 19:55:45 +02:00
dgtlmoon
d4dac23ba1 0.45.18 2024-04-16 18:50:14 +02:00
dgtlmoon
f9954f93f3 UI - Adding UI notice if watch has group options set (#2311 #2307) 2024-04-16 18:48:51 +02:00
dgtlmoon
1a43b112dc dependabot - automatically follow apprise 2024-04-15 11:17:50 +02:00
dgtlmoon
db59bf73e1 "Send Test Notification" - In the "Group" settings form it should not fall back to the system-wide notifications when sending a test if nothing is set. 2024-04-03 17:10:13 +02:00
dgtlmoon
8aac7bccbe "Send Test Notification" - Now provides better feedback and works with the actual values in system settings form 2024-04-03 16:52:42 +02:00
dgtlmoon
9449c59fbb Code - Getting ready for newer python versions - packing our own strtobool (#2291) 2024-04-03 16:17:15 +02:00
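The strtobool commit above anticipates distutils (home of the old strtobool) being removed in Python 3.12, so a small local replacement is packed instead. A sketch of such a helper with the conventional accepted values (illustrative, not necessarily the project's exact implementation):

```python
def strtobool(value: str) -> bool:
    """Local stand-in for distutils.util.strtobool (removed with Python 3.12)."""
    value = value.lower().strip()
    if value in ("y", "yes", "t", "true", "on", "1"):
        return True
    if value in ("n", "no", "f", "false", "off", "0"):
        return False
    raise ValueError(f"invalid truth value {value!r}")

assert strtobool("True") and not strtobool("off")
```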
dgtlmoon
21f4ba2208 UI - BrowserSteps - Show step screenshot/pic should use absolute URL #2243 2024-04-03 16:15:33 +02:00
dgtlmoon
daef1cd036 UI - Remove unique check for URLs entered on the "quick watch add" form ( #2286 #2292 ) 2024-04-03 16:08:33 +02:00
dgtlmoon
56b365df40 UI - Improvements to tag/groups page, show number of watches under each group, link group name to list (#2290) 2024-04-03 16:01:24 +02:00
dgtlmoon
8e5bf91965 "Send Test Notification" button from watch form edit should respect global settings and tag/group settings ( #2289, #2263 ) 2024-04-03 15:18:21 +02:00
dgtlmoon
1ae59551be 0.45.17 2024-03-31 16:35:44 +02:00
dgtlmoon
a176468fb8 UI - Add helper note 2024-03-31 16:35:09 +02:00
dgtlmoon
8fac593201 UI Text - Adding helper text to VisualSelector to explain what the connection is with the CSS/xPath filters 2024-03-26 14:58:36 +01:00
Andrew
e3b8c0f5af Update contributing documentation for discontinuation of dev branch (#2272) 2024-03-22 18:39:43 +01:00
dgtlmoon
514fd7f91e Updating pyppeteer-ng (mainly newer pillow release) (#2247) 2024-03-18 14:00:05 +01:00
dgtlmoon
38c4768b92 Notifications - Updating apprise version, pinning mqtt:// to compatible version (#2242) 2024-03-10 21:05:23 +01:00
dgtlmoon
6555d99044 0.45.16 2024-03-08 21:07:08 +01:00
dgtlmoon
e719dbd19b Pip build - content fetchers package was missing 2024-03-08 21:06:22 +01:00
dgtlmoon
b28a8316cc 0.45.15 2024-03-08 19:00:37 +01:00
dgtlmoon
e609a2d048 Updating restock detection texts 2024-03-08 15:58:40 +01:00
dgtlmoon
994d34c776 Adding CORS module - Solves Chrome extension API connectivity (#2236) 2024-03-08 13:30:31 +01:00
dgtlmoon
de776800e9 UI - Overview list shortcut button - Ability to reset any previous errors 2024-03-06 19:16:13 +01:00
dgtlmoon
8b8ed58f20 Chrome Extension - Adding link and install information from the API page 2024-03-06 15:21:03 +01:00
dgtlmoon
79c6d765de Chrome Extension - Adding link in README.md to the webstore 2024-03-06 11:15:27 +01:00
dgtlmoon
c6db7fc90e Chrome Extension - Adding callout to UI 2024-03-06 11:06:30 +01:00
pedrogius
bc587efae2 Import - Fixed "Include filters" option (fixed typo on select) (#2232) 2024-03-05 10:45:32 +01:00
dgtlmoon
6ee6be1a5f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2024-02-28 17:11:01 +01:00
dgtlmoon
c83485094b Updating restock detection texts 2024-02-28 17:10:41 +01:00
dgtlmoon
387ce32e6f Restock detection - Improving test for restock IN STOCK -> OUT OF STOCK (#2219) 2024-02-28 10:05:52 +01:00
dgtlmoon
6b9a788d75 Puppeteer - remove debug hook 2024-02-26 18:15:23 +01:00
dgtlmoon
14e632bc19 Custom headers fix in Browser Steps and Playwright/Puppeteer fetchers ( #2197 ) 2024-02-26 18:02:45 +01:00
Constantin Hong
52c895b2e8 text_json_diff/fix: Keep the order of filters and remove duplicated filters, take 2 (#2178) 2024-02-21 11:46:23 +01:00
dgtlmoon
a62043e086 Fetching - restock detection and visual selector scraper - Fixes scraping of elements that are not visible 2024-02-21 11:21:43 +01:00
dgtlmoon
3d390b6ea4 BrowserSteps UI - Avoid selecting very large elements that are likely to be the page wrapper 2024-02-21 11:00:35 +01:00
dgtlmoon
301a40ca34 Fetching - Puppeteer - Adding more debug/diagnostic information 2024-02-21 10:55:18 +01:00
dgtlmoon
1c099cdba6 Update stock-not-in-stock.js 2024-02-21 10:28:59 +01:00
dgtlmoon
af747e6e3f UI - Sorted alphabetical tag list and list of tags in groups setting (#2205) 2024-02-21 10:03:09 +01:00
dgtlmoon
aefad0bdf6 Code - Remove whitespaces in visual selector elements config 2024-02-21 09:37:35 +01:00
dgtlmoon
904ef84f82 Build fix - Pinning package versions and Custom browser endpoints should not have a proxy set (#2204) 2024-02-20 22:11:17 +01:00
dgtlmoon
d2569ba715 Update stock-not-in-stock.js 2024-02-20 20:00:31 +01:00
dgtlmoon
ccb42bcb12 Fetching pages - Custom browser endpoints should not have default proxy info added 2024-02-12 19:05:10 +01:00
dgtlmoon
4163030805 Puppeteer - fixing wait times 2024-02-12 13:02:24 +01:00
dgtlmoon
140d375ad0 Puppeteer - more improvements to proxy and authentication 2024-02-12 12:54:11 +01:00
dgtlmoon
1a608d0ae6 Puppeteer - client fixes for proxy and caching (#2181) 2024-02-12 12:40:31 +01:00
dependabot[bot]
e6ed91cfe3 dependabot - Bump the all group with 1 update (artifact store) (#2180) 2024-02-12 12:40:03 +01:00
dgtlmoon
008272cd77 Puppeteer fetch - fixing exception names 2024-02-11 11:18:36 +01:00
dgtlmoon
823a0c99f4 Code - Split content fetcher code up (playwright, puppeteer and requests), fix puppeteer direct chrome support (#2169) 2024-02-11 00:09:12 +01:00
dgtlmoon
1f57d9d0b6 Alpine linux build - adding JPEG development headers to fix build errors 2024-02-10 20:57:04 +01:00
dgtlmoon
3287283065 Playwright content fetcher - Fixes for status codes and screenshot info (#2168) 2024-02-08 15:15:04 +01:00
dgtlmoon
c5a4e0aaa3 Fetching - Prefer to use SockPuppetBrowser (#2163) 2024-02-07 20:58:21 +01:00
dgtlmoon
5119efe4fb 0.45.14 2024-02-07 12:43:23 +01:00
dgtlmoon
78a2dceb81 Bug fix - fix missing default var (#2162/ #2118/ #2122 ) 2024-02-06 17:25:44 +01:00
dgtlmoon
72c7645f60 Fix - Pinning elementpath xPath filter library to 4.1.5 (#2164) 2024-02-06 14:49:10 +01:00
dgtlmoon
e09eb47fb7 Restock detection - Update stock-not-in-stock.js (NL) 2024-02-03 22:32:39 +01:00
dgtlmoon
616c0b3f65 New text filter - Sort text alphabetically filter (#2153) 2024-02-02 11:36:58 +01:00
dgtlmoon
c90b27823a Filtering - include_filters in group and watch settings should not duplicate (#2151 #1845) 2024-02-02 09:30:01 +01:00
dgtlmoon
3b16b19a94 Record notification count and show in [stats] tab (#2150) 2024-02-02 09:12:44 +01:00
Antonio Neri
4ee9fa79e1 Restock - Update stock-not-in-stock.js Italian translation (#2149)
Added `prodotto esaurito` - Italian for out of stock
2024-02-01 16:35:31 +01:00
dgtlmoon
4b49759113 UI - Show error/warning when trying to compare the same version 2024-02-01 10:36:43 +01:00
dgtlmoon
e9a9790cb0 Fetching - Make an obvious error when using BrowserSteps with the simple text fetcher (#2145) 2024-02-01 00:09:27 +01:00
dgtlmoon
593660e2f6 Fix for switching to price-data-follower mode (when page has JSON price data), only needs to be queued once. Re #1565 2024-01-31 22:39:24 +01:00
dgtlmoon
7d96b4ba83 Fetching - Always record server software reply headers (will be used in the future) (#2143) 2024-01-31 16:15:43 +01:00
dgtlmoon
fca40e4d5b Testing - General test workflow improvements (#2144) 2024-01-31 15:10:44 +01:00
dgtlmoon
66e2dfcead RSS - Include link to the watched URL in the feed (#2139 #2131 and #327) 2024-01-29 16:26:14 +01:00
dgtlmoon
bce7eb68fb Notifications - skip empty notification URLs from being processed (#2138) 2024-01-29 14:20:39 +01:00
dgtlmoon
93c0385119 UI - Filters & Triggers - Adding example for keyword matching in a line 2024-01-29 14:18:14 +01:00
dgtlmoon
e17f3be739 RSS - Adding performance stats 2024-01-29 13:05:11 +01:00
dgtlmoon
3a9f79b756 Notification - logging - adding performance information for processing time of notifications #327 2024-01-29 12:55:08 +01:00
dgtlmoon
1f5670253e UI - Adding icon to show which watch has Browser Steps enabled (#2137) 2024-01-29 12:36:53 +01:00
dgtlmoon
fe3cf5ffd2 Logging - Adding extra debug logging to change detection (#2136) 2024-01-29 11:21:21 +01:00
dgtlmoon
d31a45d49a Fetcher - Improve status_code logging (#2130 #2122) 2024-01-25 09:53:00 +01:00
Conner
19ee65361d Notifications - Bugfix: Notification format not being set correct (HTML emails being sent as plaintext and other problems) (#2129) 2024-01-24 20:45:45 +01:00
dgtlmoon
677082723c Restock tweaks - use a single regex, tidy up height detection (#2125) 2024-01-23 13:31:05 +01:00
dgtlmoon
96793890f8 Notification - Templates - Adding an example of how to use URL encoding with tokens 2024-01-22 12:20:50 +01:00
dgtlmoon
0439155127 Notification - Templates - Adding an example of how to use |tojson for JSON payloads 2024-01-22 11:06:55 +01:00
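The two template-example commits above concern standard Jinja2 filters in notification bodies: |tojson for safely embedding a token in a JSON payload, and |urlencode for embedding it in a URL. A small standalone illustration (the token names watch_url and current_snapshot are stand-ins for whatever tokens the notification actually exposes):

```python
from jinja2 import Environment

env = Environment(autoescape=False)

# |tojson quotes and escapes the value so the payload stays valid JSON.
json_body = env.from_string('{"text": {{ current_snapshot|tojson }}}')
print(json_body.render(current_snapshot='Price dropped to "9.99"'))
# {"text": "Price dropped to \"9.99\""}

# |urlencode percent-encodes the value so it is safe inside a query string.
link_body = env.from_string("https://example.com/open?url={{ watch_url|urlencode }}")
print(link_body.render(watch_url="https://shop.example/item?id=1&ref=a b"))
```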
dependabot[bot]
29ca2521eb Build maintenance - dependabot - Bump the all build helpers (#2121) 2024-01-20 11:47:14 +01:00
Andrew Peabody
7d67ad057c Enable dependabot for github-actions (#2119) 2024-01-20 10:03:24 +01:00
dgtlmoon
2e88872b7e Update docker-compose.yml 2024-01-19 23:14:55 +01:00
dgtlmoon
b30b718373 0.45.13 2024-01-19 10:24:47 +01:00
dgtlmoon
402f1e47e7 Security update - Adding API token secure check for API endpoint /api/v1/watch/<uuid>/history @rozpuszczalny 2024-01-18 23:19:09 +01:00
dgtlmoon
9510345e01 Test - tidy up backup test (#2117) 2024-01-17 22:35:29 +01:00
dgtlmoon
36085d8cf4 Adding contributors section (#2116) 2024-01-17 18:47:24 +01:00
dgtlmoon
399cdf0fbf Logging loguru output tweaks (#2112) 2024-01-16 11:27:47 +01:00
Constantin Hong
4be0fafa93 Support Loguru as a logger (#2036) 2024-01-16 09:48:16 +01:00
dgtlmoon
51ce7ac66e Update stock-not-in-stock.js texts 2024-01-15 10:20:04 +01:00
dgtlmoon
c3d825f38c Test - Adding extra test for HTML output in emails ( #2103 ) 2024-01-14 17:47:54 +01:00
dgtlmoon
fc3c4b804d Update README.md 2024-01-14 13:40:54 +01:00
dgtlmoon
1749c07750 Restock detection - Check all elements for text to get stock status from, only consider elements inside the viewport, only consider elements more than 100px from the top (avoid menu), trim any text returned (#2040) 2024-01-12 23:11:56 +01:00
dgtlmoon
65428655b8 Notifications - When any in a list of notifications fails, the others should still work (#2106) 2024-01-12 12:25:21 +01:00
dgtlmoon
8be0029260 Browser Steps - Fixing "'Response' object is not subscriptable" where quotes were used in connection URL - Quote wrapped URL for browserstep url was breaking the connection #1627 #1823 #2099 (#2100) 2024-01-11 10:12:00 +01:00
kiyell
3c727ca54b Added OPTIONS HTTP method (#2094) 2024-01-08 23:32:44 +01:00
dgtlmoon
4596532090 API Docs - Examples should use port 5000 (same as the docker-compose default installation and other documentation) 2024-01-08 14:30:48 +01:00
dgtlmoon
d0a88d54a1 0.45.12 2024-01-05 20:17:14 +01:00
dgtlmoon
21ab4b16a0 0.45.11 2024-01-05 20:16:18 +01:00
dgtlmoon
77133de1cf Notification fixes - error on mailto:// when no format was specified, fixing default body and title of notifications to respect global settings (#2085) 2024-01-05 20:15:13 +01:00
dgtlmoon
0d92be348a Update README.md 2024-01-05 19:12:58 +01:00
dgtlmoon
3ac0c9346c Removing heroku support as its no longer free 2024-01-05 18:27:36 +01:00
dgtlmoon
b6d8db4c67 PyPi package build fixes (#2084) 2024-01-05 18:16:07 +01:00
dgtlmoon
436a66d465 Adding PyPi pip package publisher script 2024-01-05 17:27:41 +01:00
dgtlmoon
764514e5eb 0.45.10 2024-01-05 14:51:35 +01:00
dgtlmoon
ad3ffb6ccb Update README.md - Remove deprecated docker-compose (now docker compose) 2024-01-05 11:41:52 +01:00
dgtlmoon
e051b29bf2 Browser Steps - General error handling improvements (#2083) 2024-01-05 11:29:48 +01:00
Christian Arnold
126852b778 Browser Steps - Fix for correct tokens/information in browser step failure notification (#2066) 2024-01-05 11:15:22 +01:00
dgtlmoon
d115b2c858 UI - [Send test notification] - Refactor to use all tokens like a real watch and Notification Body+Title from UI value (#2079) 2024-01-04 17:02:31 +01:00
dgtlmoon
2db04e4211 Notifications upgrade - Upgrade to Apprise 1.7.1 - Emojis support, Telegram topics support, Discord support for user and role @ping support. (#2075) 2024-01-03 11:16:09 +01:00
dgtlmoon
946a556fb6 Restock detection - "In stock" should be None/"Not yet checked" by default (#2069) 2024-01-01 17:10:27 +01:00
dgtlmoon
eda23678aa Restock detection - updating texts 2024-01-01 16:43:34 +01:00
dgtlmoon
273bd45ad7 Fetching - Custom browser on experimental/puppeteer fetcher - Don't switch to custom puppeteer mode if external browser URL is active (#2068) 2024-01-01 16:40:24 +01:00
dgtlmoon
3d1e1025d2 0.45.9 2023-12-20 15:30:58 +01:00
dgtlmoon
5528b7c4b3 Restock detection - Update stock-not-in-stock.js strings (Dutch translations) 2023-12-20 15:28:43 +01:00
Constantin Hong
0dce3f4fec Testing: Improve application signal handling test coverage (#2052) 2023-12-19 11:10:51 +01:00
dgtlmoon
af4311a68c Update docker-compose.yml 2023-12-09 14:56:00 +01:00
dextouu
792fedb8bc Restock detection - Update stock-not-in-stock.js strings (#2032) 2023-12-05 09:14:40 +01:00
dgtlmoon
824748df9e API - Updating documentation 2023-12-03 10:24:54 +01:00
dgtlmoon
c086ec0d68 Update README.md 2023-12-02 15:34:51 +01:00
dgtlmoon
8e207ba438 API - Ability to add/import bulk list of watches as a line-feed separated list (#2021) 2023-12-01 18:38:49 +01:00
dgtlmoon
f0823126c8 Notifications - Fixing support for headers in custom post://, posts:// notifications, ability to include HTTP headers when making custom notifications (#2018) 2023-12-01 18:05:19 +01:00
dgtlmoon
98f56736c1 Improve handling of SIGTERM shutdown in containers, remove unnecessary multi-process handler for pip installs, tidy up modules (#2014) 2023-12-01 17:31:09 +01:00
dgtlmoon
872bd2de85 UI - Extra Browsers - Adding links and more resources on how to connect a fingerprint/scraping browser 2023-12-01 13:49:12 +01:00
dgtlmoon
e6de1dd135 0.45.8.1 2023-11-30 20:01:40 +01:00
dgtlmoon
599291645d PDF Fetcher for change detection - Always use plain requests for PDF because otherwise we cant access the embed PDF in the browser (#2020) 2023-11-30 20:01:14 +01:00
dgtlmoon
156d403552 UI - Fix - Edit Watch 'Show advanced options' should fire at page load to show you what's possible 2023-11-30 19:20:30 +01:00
dgtlmoon
7fe0ef7099 0.45.8 2023-11-29 10:25:11 +01:00
dgtlmoon
fe70beeaed Restock detector - adding more detection strings 2023-11-29 10:21:30 +01:00
dgtlmoon
abf7ed9085 UI - remove incorrect label 2023-11-29 10:19:49 +01:00
dgtlmoon
19e752e9ba UI - "Add new watch" URL at main input box should always grow to match the viewport 2023-11-28 18:11:11 +01:00
dgtlmoon
684e96f5f1 UI - Tidy-up for advanced settings under watch edit, HTML validation fixes (#2011) 2023-11-28 17:31:08 +01:00
dgtlmoon
8f321139fd UI - 'Request body' section disappears after switching from 'Playwright' to 'System settings default' and back on 'Request' tab - Fixed #1449 2023-11-28 14:01:15 +01:00
dgtlmoon
7fdae82e46 Browser Steps - Adding validation for "Click X,Y" step 2023-11-28 12:36:15 +01:00
dgtlmoon
bbc18d8e80 API - Make sure the watch "is viewed" attribute is correctly represented in the API output (#2009) 2023-11-28 11:42:08 +01:00
dgtlmoon
d8ee5472f1 Update playwright fetcher library and API calls 2023-11-28 11:20:06 +01:00
dgtlmoon
8fd57280b7 Testing - Improve PDF text change detection tests (#1992) 2023-11-20 15:18:46 +01:00
dgtlmoon
0285d00f13 UI - Clicking the "[Diff]" link should take you to the difference starting at the relative time to when you last viewed the difference page (#1989) 2023-11-17 17:21:26 +01:00
dgtlmoon
f7f98945a2 Visual Selector - xPath handling misc fixes (#1976) 2023-11-13 21:23:43 +01:00
dgtlmoon
5e2049c538 Fix build issue 2023-11-13 17:02:27 +01:00
Constantin Hong
26931e0167 feature: Support XPath2.0 to 3.1 (#1774) 2023-11-13 16:42:21 +01:00
dgtlmoon
5229094e44 New functionality - Selectable browser / ability to add extra browser connections (good for using "scraping browsers" etc.) (#1943) 2023-11-13 16:39:11 +01:00
dgtlmoon
5a306aa78c API/UI - Button to regenerate API key (#1975 / #1967) 2023-11-13 16:26:50 +01:00
dgtlmoon
c8dcc072c8 Code refactor for fetchers (#1941) 2023-11-13 10:42:56 +01:00
dgtlmoon
7c97a5a403 0.45.7.3 2023-11-12 12:05:54 +01:00
dgtlmoon
7dd967be8e Build - update docker container cache setup 2023-11-12 10:06:46 +01:00
dgtlmoon
3607d15185 0.45.7.2 2023-11-12 00:29:35 +01:00
dgtlmoon
3382b4cb3f UI - Cleanup fonts better display in firefox, request CSS according to version (#1968) 2023-11-12 00:29:15 +01:00
Marcelo Alencar
5f030d3668 Revert "Build - Add piwheels support for ARMv6 and ARMv7 machines (rPi etc) (#1814)" (#1964) 2023-11-11 20:48:34 +01:00
dgtlmoon
06975d6d8f 0.45.7.1 2023-11-11 20:42:16 +01:00
dgtlmoon
f58e5b7f19 Build: python libraries - pinning more libraries (#1962) 2023-11-11 20:41:21 +01:00
dgtlmoon
e50eff8e35 Build: python libraries - eventlet + dnspython dep problems were fixed (#1963) 2023-11-11 20:16:09 +01:00
dgtlmoon
07a853ce59 Pip builder - ignore proxy test data if it exists 2023-11-10 17:50:09 +01:00
dgtlmoon
80f8d23309 0.45.7 2023-11-10 17:39:49 +01:00
dgtlmoon
9f41d15908 UI - Fixing issue where search box JS interfered with page render when logged out 2023-11-10 17:38:04 +01:00
dgtlmoon
89797dfe02 0.45.6 2023-11-10 17:32:21 +01:00
dgtlmoon
c905652780 UI - Adding support-us widget <3 (#1956) 2023-11-10 17:31:00 +01:00
dgtlmoon
99246d3e6d Visual Selector - Small fix, Improving elements fetcher reliability (#1947) 2023-11-09 19:13:18 +01:00
dgtlmoon
f9f69bf0dd Update README.md - Adding import information 2023-11-08 11:55:02 +01:00
dgtlmoon
68efb25e9b Upgrade playwright browser library (#1942) 2023-11-07 16:33:29 +01:00
dgtlmoon
70606ab05d Update docker-compose.yml - playwright version should be the same as in the automated tests 2023-11-06 22:33:22 +01:00
Jai Gupta
d3c8386874 Import - Improved Wachete Excel XLS import support for the "dynamic wachet" column (sets the correct state of using the Chrome browser or not) (#1934) 2023-11-06 12:50:27 +01:00
dgtlmoon
47103d7f3d Refactor Excel / wachete import, extend tests (#1931) 2023-11-03 15:43:57 +01:00
dgtlmoon
03c671bfff Build - Upgrading pip packages (#1915) 2023-11-01 18:47:12 +01:00
dgtlmoon
e209d9fba0 Ability to Import from Wachete XLSX (or any XLSX) - Wachete alternative made easy (#1921) 2023-11-01 15:36:49 +01:00
dgtlmoon
3b43da35ec Docker build - upgrade image to "bookworm" debian version - fix glibc mismatch (#1918) 2023-10-31 10:31:34 +01:00
dgtlmoon
a0665e1f18 Fetcher - experimental puppeteer fetch - dont rewrite the proxy protocol (fixes socks5 bug) 2023-10-30 16:08:31 +01:00
dgtlmoon
9ffe7e0eaf Nice format stats (comma sep) 2023-10-29 19:13:15 +01:00
dgtlmoon
3e5671a3a2 Selenium fetcher - Test was on 4.14.1 but documentation was not, change both to 4 (#1912) 2023-10-29 11:43:27 +01:00
dgtlmoon
cd1aca9ee3 0.45.5 2023-10-28 20:20:24 +02:00
dgtlmoon
6a589e14f3 BrowserSteps - Wrong text taken from browser steps (#1911) 2023-10-28 20:19:51 +02:00
dgtlmoon
dbb76f3618 0.45.4 2023-10-28 16:48:10 +02:00
dgtlmoon
4ae27af511 Code cleanup - Browser Steps 2023-10-28 14:58:12 +02:00
dgtlmoon
e1860549dc Fetching - Browser Step enabled watches should also identify 404/non-200 status situations (#1907) 2023-10-28 14:37:42 +02:00
dgtlmoon
9765d56a23 Text Filters - "Extract Text" filter was not being error checked properly when using a RegEx (#1902) 2023-10-26 20:19:59 +02:00
dgtlmoon
349111eb35 Fetching/BrowserSteps - Going to a page was using slightly different logic from the main way - make them use the same methods (#1890) 2023-10-26 20:19:22 +02:00
dgtlmoon
71e50569a0 UI - "With errors" tag/button should always show the current tag error count 2023-10-26 19:42:48 +02:00
Marcelo Alencar
c372942295 Build - Add piwheels support for ARMv6 and ARMv7 machines (rPi etc) (#1814) 2023-10-26 13:46:14 +02:00
Marcelo Alencar
0aef5483d9 Upgrade selenium to 4.14.0 (latest) (#1783) 2023-10-26 10:09:03 +02:00
dgtlmoon
c266c64b94 UI - Don't show search icon when logged out (#1896) 2023-10-25 13:31:33 +02:00
dgtlmoon
32e5498a9d UI - Adding handy "limit to watches with errors" button (#1886) 2023-10-23 12:22:43 +02:00
dgtlmoon
0ba7928d58 UI - Viewing text differences - Tweaks to "Jump to next change" button 2023-10-23 11:42:01 +02:00
dgtlmoon
1709e8f936 UI - BrowserSteps - Show the screenshot of an error if it happened on a step, highlight which step had the error to make it easier to find out why the step didn't work, minor fixes to timeouts (#1883) 2023-10-21 09:41:51 +02:00
dgtlmoon
b16d65741c UI - Visual Selector should be the same page-size as Browser Steps (fit inside the browser viewport) 2023-10-20 16:15:17 +02:00
Constantin Hong
1cadcc6d15 Packaging - Enable jq query for filters package installation for darwin (mac) #1868 2023-10-20 15:11:16 +02:00
dgtlmoon
b58d521d19 UI - Adding [stats] tab to watch Edit page (#1880) 2023-10-20 11:49:12 +02:00
dgtlmoon
52225f2ad8 Bugfix - [Clear history] button was not clearing all metadata (#1881) 2023-10-20 11:47:49 +02:00
dgtlmoon
7220afab0a RSS fetch - RSS field <title> was not rendering as text correctly, added workaround #1879 2023-10-19 16:42:05 +02:00
dgtlmoon
1c0fe4c23e PDF Fetching - Handle when the PDF is given as inline content without a proper mime header (#1875) 2023-10-19 13:20:01 +02:00
dgtlmoon
4f6b0eb8a5 Notification library - Bump Apprise notification library to 1.6.0 (#1867) 2023-10-17 22:18:53 +02:00
dgtlmoon
f707c914b6 RSS Fetching - Handle CDATA (commented out text) in RSS correctly, generally handle RSS better (#1866) 2023-10-17 18:34:19 +02:00
dgtlmoon
9cb636e638 UI - Adding mouseover/title to show absolute date/time of a last-change or last-checked date #1860 2023-10-17 14:03:19 +02:00
dgtlmoon
1d5fe51157 UI - Difference text viewer - fixing jump to new difference on changing word/line/etc style 2023-10-17 13:43:58 +02:00
dgtlmoon
c0b49d3be9 Testing - Improve xPath tests (#1863) 2023-10-16 16:48:47 +02:00
dgtlmoon
c4dc85525f UI - Fixing jump to next difference button after refactor 2023-10-14 23:32:18 +02:00
dgtlmoon
26159840c8 UI - Updating proxy tip link 2023-10-14 23:27:41 +02:00
dgtlmoon
522e9786c6 UI - Adding watch label/title to [edit] page title (#1858) 2023-10-13 12:51:31 +02:00
dgtlmoon
9ce86a2835 Documentation - Add note that playwright is not supported on ARM type devices #1856 2023-10-12 10:14:31 +02:00
dgtlmoon
f9f6300a70 UI - Difference page - added 'title' to each change for nice mouse-over information about when the change occurred 2023-10-11 16:46:54 +02:00
dgtlmoon
7734b22a19 UI - Difference page - Tweak 'preview' page invite text 2023-10-11 16:31:04 +02:00
dgtlmoon
da421fe110 UI - Ability to select between any difference date ( from / to ) and minor UI cleanup for differences page (#1855) 2023-10-11 16:25:36 +02:00
dgtlmoon
3e2b55a46f UI - Difference page, make the button to find the preview page for triggers and ignored text easier to find 2023-10-11 16:24:32 +02:00
dgtlmoon
7ace259d70 System - No need to run updates on fresh installs (#1854) 2023-10-11 14:04:12 +02:00
dgtlmoon
aa6ad7bf47 UI - Proxy configuration helper notes improvements 2023-10-10 15:41:56 +02:00
dgtlmoon
40dd29dbc6 Preview/Difference page - When sharing the preview/difference page, login should be required before highlight-to-ignore can be used (#1852) 2023-10-10 11:39:44 +02:00
dgtlmoon
7debccca73 Fetching - Clarifying how fetchers work with SOCKS5 proxies 2023-10-09 16:57:30 +02:00
dgtlmoon
59578803bf 0.45.3 2023-10-05 12:29:59 +02:00
dgtlmoon
a5db3a0b99 Update README-pip.md 2023-10-05 12:28:17 +02:00
dgtlmoon
49a5337ac4 Update README.md 2023-10-05 12:24:09 +02:00
dgtlmoon
ceac8c21e4 LD JSON Price followers - Handle incorrectly created LD-JSON price structures better (#1837) 2023-10-04 15:57:55 +02:00
Constantin Hong
a7132b1cfc Dockerfile/fix: Update builder and runner to Python 3.11 (#1781) 2023-10-04 10:46:54 +02:00
dgtlmoon
2b948c15c1 Backend - Regular expression / string filtering refactor for Python 3.11 and deprecation warnings since Python 3.6 (#1786) 2023-10-03 17:44:27 +02:00
dgtlmoon
34f2d30968 Update README.md 2023-10-03 16:29:42 +02:00
dgtlmoon
700729a332 UI - BrowserSteps - Browser Steps interface screen should resize relative to the browser 2023-10-02 18:06:25 +02:00
dgtlmoon
b6060ac90c BrowserSteps - <input> of type 'number' should use 'enter text in field' 2023-10-02 11:50:15 +02:00
dgtlmoon
5cccccb0b6 Restock detect - bumping texts for restock detection 2023-09-26 14:32:39 +02:00
dgtlmoon
c52eb512e8 UI - Proxy Scanner tool should also understand when a filter is empty or contains only an image 2023-09-26 14:29:42 +02:00
dgtlmoon
7282df9c08 UI + Fetching - Improving helper message when filter contains only an image (adding link to more help) 2023-09-26 14:10:07 +02:00
dgtlmoon
e30b17b8bc UI + Fetching - Be more helpful when a filter contains no text, suggest ways to deal with images in filters (#1819) 2023-09-26 13:59:59 +02:00
Marcelo Alencar
1e88136325 Building application - Upgrade test workflows to latest versions (#1817) 2023-09-26 10:18:54 +02:00
dgtlmoon
57de4ffe4f Page fetching - Fixed possible incorrect browser user-agent header in playwright/puppeteer/browserless fetchers (#1811) 2023-09-24 08:42:24 +02:00
dgtlmoon
51e2e8a226 UI - Add extra validation help for notification body with Jinja2 markup (#1810) 2023-09-23 14:50:21 +02:00
dgtlmoon
8887459462 UI - More precise text to describe "current_snapshot" notification token 2023-09-23 14:31:48 +02:00
dgtlmoon
460c724e51 0.45.2 2023-09-22 09:45:55 +02:00
dgtlmoon
dcf4bf37ed Code/Test - Improve testing for creating backups 2023-09-22 09:21:07 +02:00
dgtlmoon
e3cf22fc27 UI - Re-order notification field settings 2023-09-14 14:34:44 +02:00
dgtlmoon
d497db639e UI - Notifications - Tidyup - Hide the notification tokens but show with a button/link 2023-09-14 14:16:08 +02:00
dgtlmoon
7355ac8d21 UI - Notifications - Tweak discord help text 2023-09-14 13:55:48 +02:00
dgtlmoon
2f2d0ea0f2 RSS feeds - Fixing broken links from RSS index in some environments, refactor code (#152, #148, #1684, #1798) 2023-09-14 13:19:45 +02:00
dgtlmoon
a958e1fe20 UI - "recheck all" button should ignore blank/empty "tag" setting when set 2023-09-12 15:13:21 +02:00
dgtlmoon
5dc3b00ec6 Update README.md 2023-09-11 23:27:10 +02:00
dgtlmoon
8ac4757cd9 UI - Fix spelling error 2023-09-11 13:18:05 +02:00
dgtlmoon
2180bb256d UI - Make tgram:// and discord:// examples in notification settings link to how-to pages (#1785) 2023-09-11 10:22:35 +02:00
dgtlmoon
212f15ad5f Catch possible crash scenario for listing watches - date_created was missing on add (#1787) 2023-09-10 13:44:24 +02:00
dgtlmoon
22b2068208 Ability to select "No proxy" for a watch when you have proxies configured 2023-09-08 14:14:47 +02:00
dgtlmoon
4916043055 Updating notification library - Adds support for Pushy, PushDeer, PushMe and Matrix attachment support (screenshots) 2023-09-08 12:40:23 +02:00
dgtlmoon
7bf13bad30 Update README.md 2023-09-07 16:46:21 +02:00
dgtlmoon
0aa2276afb UI - Fixing update for sort by "date created" or "#" in watch overview table ( #1775 ) 2023-09-07 10:36:34 +02:00
Tiago Ilieve
3b875e5a6a Add 'diff_patch' notification body token - This will allow the diff to be generated in the "unified patch format." (#1765) 2023-09-07 08:55:06 +02:00
Constantin Hong
8ec50294d2 README/docs: Clarifying xpath version changedetection.io uses (#1773) 2023-09-07 08:49:26 +02:00
dgtlmoon
e3c9255d9e 0.45.1 2023-09-06 12:27:56 +02:00
dgtlmoon
3b03bdcb82 UI - Fixing open/broken HTML which was causing some buttons to not display 2023-09-06 12:27:27 +02:00
dgtlmoon
e25792bcec 0.45 2023-09-06 09:46:27 +02:00
dgtlmoon
bf4168a2aa Adding Oxylabs proxy recommendation to proxy config page (#1756) 2023-09-06 09:43:23 +02:00
dgtlmoon
9d37eaa57b Fix - Link in the RSS feed was showing the path twice (when used in reverse proxy) 2023-09-05 17:28:13 +02:00
dgtlmoon
40d01acde9 Fix - Regular Expression text in ignore and trigger were not processing correctly, also refactored for lower CPU usage (#1747) 2023-09-05 13:07:17 +02:00
Ikko Eltociear Ashimine
d34832de73 Fix typo in README.md (#1759) 2023-09-04 16:40:12 +02:00
dgtlmoon
ed4bafae63 UI - "Test notification" button in "Group Tag" settings page was broken due to missing variable #1753 2023-08-31 13:29:38 +02:00
dgtlmoon
3a5bceadfa UI - Clicking 'ignore text' when highlighting text should clear the preview text button/area. #1754 2023-08-31 13:24:19 +02:00
dgtlmoon
6abdf2d332 Update documentation - How to set number of concurrent fetchers 2023-08-30 18:02:10 +02:00
dgtlmoon
dee23709a9 0.44.2 2023-08-28 19:01:59 +02:00
dgtlmoon
52df3b10e7 UI - Ability to highlight text and have it offered as a ignore-text option, really nice easy way to set ignores on changing text (#1746) 2023-08-24 14:29:48 +02:00
dgtlmoon
087d21c61e Update README.md 2023-08-22 11:36:15 +02:00
dgtlmoon
171faf465c Enable ARMv8 builds (for RaspberryPi and other portable devices) (#1733) 2023-08-13 23:33:49 +02:00
dgtlmoon
a3d8bd0b1a Updating in app links 2023-08-13 18:35:58 +02:00
dgtlmoon
6ef8a1c18f Updating URL validation library, ability to block access to simple (no dot) hostnames like "localhost" with BLOCK_SIMPLEHOSTS setting (#1732) 2023-08-13 18:27:55 +02:00
Marcelo Alencar
126f0fbf87 Re-enable ARMv6 builds (for Raspberry and other portable devices) (#1724) 2023-08-07 15:48:33 +02:00
dgtlmoon
cfa712c88c 0.44.1 2023-08-02 08:55:07 +02:00
dgtlmoon
6a6ba40b6a Re-enable ARMv7 builds (for Raspberry and other portable devices) 2023-08-01 17:10:24 +02:00
dgtlmoon
e7f726c057 UI - Fixing darkmode switch icon 2023-07-24 14:06:40 +02:00
dgtlmoon
df0cc7b585 0.44 2023-07-17 18:03:42 +02:00
dgtlmoon
76cd98b521 Updating AppRise notification library, Improved pover, ntfy support, whatsapp updates, Pagertree support, Voip.ms support, Misskey support, plus many fixes and improvements. 2023-07-17 17:32:12 +02:00
dgtlmoon
f84ba0fb31 API - Updating API description for handling a single watch 2023-07-17 17:19:41 +02:00
dgtlmoon
c35cbd33d6 Removing docker build for RaspberryPi (armv6/armv7) for now due to packaging problems 2023-07-17 17:10:29 +02:00
dgtlmoon
661f7fe32c Proxy scan improvements - handle custom proxies, dont restart when a scan is already running (#1689) 2023-07-11 16:48:50 +02:00
dgtlmoon
7cb7eebbc5 Browser Steps - When cleaning up old screenshots, check the file exists 2023-07-11 10:44:54 +02:00
dgtlmoon
aaceb4ebad Scan/Recheck proxies - Report filter not found as "OK" but with warning 2023-07-11 10:44:21 +02:00
dgtlmoon
56cf6e5ea5 Bug fix - Previously encountered fetch errors were sometimes not being cleared (#1687) 2023-07-11 09:23:41 +02:00
dgtlmoon
1987e109e8 New feature - Helper button to trigger a scan/access test of all proxies for a particular watch (#1685) 2023-07-10 16:08:45 +02:00
dgtlmoon
20d65cdd26 0.43.2 2023-06-30 22:57:05 +02:00
dgtlmoon
37ff5f6d37 Bug - SMTP mailto:// Notification content-type (HTML/Text) fix and add CI tests (#1660) 2023-06-30 21:35:35 +02:00
dgtlmoon
2f777ea3bb Fix - Watches weren't falling back to global default formats correctly when required (#1656) 2023-06-28 00:03:02 +02:00
dgtlmoon
e709201955 0.43.1 2023-06-27 18:28:18 +02:00
dgtlmoon
572f71299f Bug fix - Notification settings were not cascading from global -> tags -> watch correctly in some cases (#1654) 2023-06-27 18:27:33 +02:00
dgtlmoon
5f150c4f03 Bug - Fix watch clone (#1647) 2023-06-27 17:05:32 +02:00
dgtlmoon
8cbf8e8f57 UI - Dont allow empty tag names (#1641) 2023-06-22 18:17:41 +02:00
dgtlmoon
0e65dda5b6 0.43 2023-06-21 14:35:08 +02:00
dgtlmoon
72a415144b UI - Watch Table - Clicking anywhere on the watch list row table also activates the operations buttons and checkbox 2023-06-21 14:07:28 +02:00
dgtlmoon
52f2c00308 UI/Functionality - Ability to manage/apply filters and notifications across tags/groups 2023-06-19 23:29:13 +02:00
Raymond Ha
72311fb845 UI - Fixes to dark mode toggle (#1629) 2023-06-18 00:04:04 +02:00
Jeong-Hee Kang
f1b10a22f8 Docker container updates - use specific debian version (libssl1 vs libssl3) (#1630) 2023-06-18 00:03:42 +02:00
dgtlmoon
a4c620c308 Code - Adding CI test for search (#1626) 2023-06-13 15:03:32 +02:00
dgtlmoon
9434eac72d 0.42.3 2023-06-12 15:28:51 +02:00
dgtlmoon
edb5e20de6 Bug fix - Fixed crash when deleting watch from UI when watch was already manually deleted from datadir (#1623) 2023-06-12 15:10:48 +02:00
dgtlmoon
e62eeb1c4a README - Update links to new website 2023-06-02 18:58:06 +02:00
Maciej Rapacz
a4e6fd1ec3 Fetcher / Parser - Automatically attempt to extract JSON from document when document contains JSON but could be wrapped in HTML (#1593) 2023-05-30 08:57:17 +02:00
dgtlmoon
d8b9f0fd78 Test improvement - Also test that custom request headers works with Playwright/Browserless (#1607) 2023-05-29 17:44:38 +02:00
dgtlmoon
f9387522ee Fetching - Be sure that content-type detection works when the headers are a mixed case (#1604) 2023-05-29 16:11:43 +02:00
dgtlmoon
ba8d2e0c2d UI/Fetching - Update "Filter not found" message to be more explanatory/helpful (#1602) 2023-05-28 12:09:51 +02:00
dgtlmoon
247db22a33 Restock monitor - Updating texts for tickets available/unavailable restock detection 2023-05-27 13:31:35 +02:00
William
aeabd5b3fc Docs - Update README.md (Changed LXML re:math reference to re:match) (#1594) 2023-05-25 16:55:52 +02:00
dgtlmoon
e9e1ce893f 0.42.2 2023-05-25 16:47:30 +02:00
dgtlmoon
b5a415c7b6 UI - Configurable pager size #1599 #1598 2023-05-25 16:38:54 +02:00
dgtlmoon
9e954532d6 Fetcher - Ability to specify headers from a textfile per watch, global or per tag ( https://github.com/dgtlmoon/changedetection.io/wiki/Adding-headers-from-an-external-file ) 2023-05-22 17:19:52 +02:00
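As a rough illustration of the idea behind the external headers file described on the linked wiki page (one "Name: value" pair per line), a minimal parsing sketch might look like the following; the filename and helper function are hypothetical and not the project's actual code:

```python
# Minimal sketch: turn a "headers.txt"-style file (one "Name: value" per line)
# into a dict suitable for an HTTP request. Names here are illustrative only.
def parse_headers_file(path):
    headers = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line or ":" not in line:
                continue  # skip blank or malformed lines
            name, value = line.split(":", 1)
            headers[name.strip()] = value.strip()
    return headers

# e.g. parse_headers_file("headers.txt") -> {"Cookie": "session=abc", "User-Agent": "..."}
```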
dgtlmoon
955835df72 Restock detection - Better reporting when it fails (#1584) 2023-05-21 23:10:39 +02:00
dgtlmoon
1aeafef910 Fetcher - Puppeteer experimental fetcher wasn't returning the status-code (#1585) 2023-05-21 23:10:08 +02:00
dgtlmoon
1367197df7 Update README.md 2023-05-21 21:28:19 +02:00
dgtlmoon
143971123d 0.42.1 2023-05-21 14:20:23 +02:00
dgtlmoon
04d2d3fb00 Fetcher fix - Clear any previously recorded fetch error when the newly fetched document is the same as the last one 2023-05-21 12:14:18 +02:00
dgtlmoon
236f0c098d 0.42 2023-05-18 22:10:10 +02:00
dgtlmoon
582c6b465b UI - "Search List" also works for 'Title' field 2023-05-18 19:24:13 +02:00
dgtlmoon
a021ba87fa UI - New "Search List" icon and functionality (#1580) 2023-05-18 18:58:49 +02:00
dgtlmoon
e9057cb851 VisualSelector - Add message when first version cannot be found 2023-05-15 16:57:39 +02:00
dgtlmoon
72ec438caa UI - update link to official project page 2023-05-15 13:31:30 +02:00
dgtlmoon
367dec48e1 BrowserSteps - Don't highlight elements that are the full page width (body, wrappers etc) 2023-05-15 10:43:33 +02:00
dgtlmoon
dd87912c88 BrowserSteps - Support for float seconds (0.5 etc) 2023-05-15 10:35:25 +02:00
dgtlmoon
0126cb0aac BrowserSteps - Session keep alive timer countdown fix 2023-05-13 00:30:37 +02:00
dgtlmoon
463b2d0449 BrowserSteps - adding setup check 2023-05-12 15:41:00 +02:00
dgtlmoon
e4f6d54ae2 BrowserSteps - Refactored to re-use playwright context which should solve some errors 2023-05-12 15:38:55 +02:00
dgtlmoon
5f338d7824 BrowserSteps - Be sure to select the most appropriate input/button/a when an input element is wrapped in a <div> or other 2023-05-12 10:35:18 +02:00
dgtlmoon
0b563a93ec Fetcher - Experimental fetcher - don't cache embedded data URLs 2023-05-11 16:52:32 +02:00
dgtlmoon
d939882dde Fetcher - Experimental fetcher improvements (Code TidyUp, Improve tests, revert to old playwright when using BrowserSteps for now) (#1564) 2023-05-11 16:36:35 +02:00
dgtlmoon
690cf4acc9 BrowserSteps - Include nice big start button SVG 2023-05-11 16:34:50 +02:00
dgtlmoon
3cb3c7ba2e BrowserSteps - Remove unreliable method for detecting if the element has a "click" listener and default to click (switch to 'Click X,Y' will return the right co-ords anyway) 2023-05-11 16:26:46 +02:00
dgtlmoon
5325918f29 Puppeteer fetcher, adding disk cache and other fixes (#1563) 2023-05-10 23:23:34 +02:00
dgtlmoon
8eee913438 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2023-05-07 14:19:38 +02:00
dgtlmoon
06921d973e UI - Adding shortcut list select button for "clear/reset history" 2023-05-07 14:19:30 +02:00
dgtlmoon
316f28a0f2 Fetcher - Experimental fetcher fixes, now only enabled with 'USE_EXPERIMENTAL_PUPPETEER_FETCH' env var (default off) (#1561) 2023-05-07 13:49:53 +02:00
dgtlmoon
3801d339f5 UI - Adding shortcut list select button for "clear/reset history" 2023-05-07 13:47:17 +02:00
dgtlmoon
d814535dc6 Element scraper - wrap offset detection in try/catch 2023-05-07 13:15:38 +02:00
dgtlmoon
cf3f3e4497 BrowserSteps - BrowserSteps was not always following proxy information 2023-05-07 13:15:29 +02:00
dgtlmoon
ba76c2a280 BrowserSteps - remove minor delay 2023-05-07 13:15:20 +02:00
dgtlmoon
94f38f052e Fetcher - playwright/browserless - Use builtin node puppeteer handler in browserless, scales way better, and is faster (#1559) 2023-05-05 21:58:08 +02:00
Raymond Ha
1710885fc4 UI - Fix back navigation / browser history (#1556) 2023-05-04 16:54:04 +02:00
dgtlmoon
2018e73240 UI - HTML validation improvements for edit forms (#1553) 2023-04-30 10:38:50 +02:00
dgtlmoon
fae8c89a4e UI - Various minor HTML validation fixes 2023-04-29 22:29:57 +02:00
dgtlmoon
40988c55c6 UI - pagination - use count including tag filter for pagination display 2023-04-29 20:19:18 +02:00
dgtlmoon
5aa713b7ea UI - Notifications - Adding icon to "Add Email" button 2023-04-29 20:14:42 +02:00
dgtlmoon
e1f5dfb703 UI - Adding pagination to watch list (#1549) 2023-04-29 19:24:13 +02:00
dgtlmoon
966600d28e UI - Set selected watches as 'viewed' (#1550) 2023-04-29 19:20:19 +02:00
dgtlmoon
e7ac356d99 UI - Fix missing </span> in watch list when using restock detection 2023-04-29 18:44:57 +02:00
dgtlmoon
e874df4ffc UI - Make sort order and type sticky in cookies, ability to sort by watch created time (#1519) 2023-04-29 17:44:23 +02:00
dgtlmoon
d1f44d0345 Notifications - Send test notification should use system defaults for body and title if not set in watch (#1547 #1503) 2023-04-29 16:20:01 +02:00
dgtlmoon
8536af0845 Adding generic changedetection.io SVG icon #1527 2023-04-14 09:50:55 +02:00
dgtlmoon
9076ba6bd3 Tests - error test - be sure to clear results from other test parts 2023-04-06 16:12:18 +02:00
dgtlmoon
43af18e2bc Update README.md 2023-04-06 15:26:06 +02:00
dgtlmoon
ad75e8cdd0 Tests - Add test to check that low level fetch errors are cleared on next check 2023-04-06 14:46:08 +02:00
dgtlmoon
f604643356 Restock alerts - adding extra detection texts 2023-04-06 13:51:33 +02:00
dgtlmoon
d5fd22f693 Restock monitor - Identify the cases where the product is also definitely in stock (#1489) 2023-03-23 18:34:56 +01:00
dgtlmoon
1d9d11b3f5 Automated CI test for ensuring pypi package was built correctly (#1488) 2023-03-23 12:20:18 +01:00
dgtlmoon
f49464f451 GitHub container build - 'provenance' was disabled 2023-03-22 10:40:49 +01:00
dgtlmoon
bc6bde4062 0.41.1 2023-03-21 23:16:01 +01:00
dgtlmoon
2863167f45 Fix for pip installations 2023-03-21 23:15:53 +01:00
dgtlmoon
ce3966c104 0.41 2023-03-21 20:30:21 +01:00
dgtlmoon
d5f574ca17 Notifications - Include triggered text token as {{triggered_text}} in notifications, so you can send just the content that matches. (#1485) 2023-03-21 19:16:13 +01:00
dgtlmoon
c96ece170a Notification tokens - add comment that the {{tokens}} can be used in the URLs also 2023-03-21 19:04:12 +01:00
dgtlmoon
1fb90bbddc Quick add form - adjust font size and rename stock recheck 2023-03-20 20:19:32 +01:00
dgtlmoon
55b6ae86e8 Ability to set which text to process triggers on (added, removed, changed) according to the difference (#1483) 2023-03-20 20:16:57 +01:00
dgtlmoon
66b892f770 Restock / stock / out of stock monitor - bumping detection texts 2023-03-20 15:01:52 +01:00
dgtlmoon
3b80bb2f0e Use brotli for reducing the size of the text snapshots (#1482) 2023-03-19 21:12:22 +01:00
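For context, compressing a text snapshot with brotli is a small, standard operation in Python; this is only a sketch of the general approach, not the project's storage code:

```python
import brotli  # pip install brotli

snapshot_text = "the fetched page text ..." * 100
compressed = brotli.compress(snapshot_text.encode("utf-8"))
restored = brotli.decompress(compressed).decode("utf-8")
assert restored == snapshot_text
print(f"{len(snapshot_text.encode('utf-8'))} bytes -> {len(compressed)} bytes")
```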
dgtlmoon
e6d2d87b31 Notification screenshots - now PNG only for now to save disk space (no point creating two images) (#1481) 2023-03-18 20:52:52 +01:00
dgtlmoon
6e71088cde New feature - Restock / stock / out of stock monitor option/mode 2023-03-18 20:36:26 +01:00
dgtlmoon
2bc988dffc UI - Clone/copy watch - A paused watch should not be checked when copied/cloned #1471. 2023-03-17 23:58:15 +01:00
dgtlmoon
a578de36c5 Update README.md 2023-03-17 16:56:29 +01:00
dgtlmoon
4c74d39df0 Code - Abstract out the diff fetch types to make it easier to integrate new ones (#1467) 2023-03-12 18:11:53 +01:00
dgtlmoon
c454cbb808 BrowserSteps - Adding Goto URL step 2023-03-12 17:22:56 +01:00
dgtlmoon
6f1eec0d5a Fixing bad linebreak definition </br> in notifications and UI (#1465) 2023-03-12 17:05:34 +01:00
reecespieces
0d05ee1586 Notification Improvements - New tokens {{diff_added}} and {{diff_removed}}, removed whitespace around added and into ( Issue #905 ) (#1454) 2023-03-12 16:21:47 +01:00
dgtlmoon
23476f0e70 Update README.md 2023-03-01 23:13:35 +01:00
dgtlmoon
cf363971c1 Bug - False change alerts - code cleanups Re #962 (#1444) 2023-02-28 18:04:58 +01:00
dgtlmoon
35409f79bf Update README.md 2023-02-28 14:55:43 +01:00
dgtlmoon
fc88306805 Be sure that process_changedetection_results is off after PageUnloadable and EmptyReply exceptions from fetcher - Re #962 (#1439) 2023-02-26 13:54:14 +01:00
dgtlmoon
8253074d56 False change alerts fix - Don't reset watch checksum when a fetch error happens, adjust test to not test for fluctuating filter (#1437) 2023-02-25 22:14:47 +01:00
Fabian Affolter
5f9c8db3e1 Library update - Replace bs4 with beautifulsoup4 (#1433) 2023-02-25 22:06:13 +01:00
dgtlmoon
abf234298c API - Including last_changed timestamp in watch API info (#1436) 2023-02-25 22:00:46 +01:00
Hmmbob
0e1032a36a Update apprise to 1.3.0 (#1430) 2023-02-25 21:06:12 +01:00
dgtlmoon
3b96e40464 API documentation - improving example for list watches 2023-02-22 23:43:14 +01:00
dgtlmoon
c747cf7ba8 API documentation - improving example for snapshot history 2023-02-22 23:40:16 +01:00
dgtlmoon
3e98c8ae4b API - Adding current version to 'System Information' endpoint, bumping API docs, Re #1429 2023-02-22 23:34:36 +01:00
dgtlmoon
aaad71fc19 Further improving API documentation Re #1426 2023-02-22 21:30:02 +01:00
dgtlmoon
78f93113d8 Improving API documentation Re #1426 2023-02-22 20:57:01 +01:00
dgtlmoon
e9e586205a Browser Steps - Adding "Wait for text" and "Wait for text in element" Re #1427 2023-02-22 20:10:21 +01:00
dgtlmoon
89f1ba58b6 Re #1382 - UI fix - sorting now works with selected tag 2023-02-17 20:39:18 +01:00
dgtlmoon
6f4fd011e3 Don't rewrite/resave the snapshot when it's the same data, just bump the history index, saves disk space. (#1414) 2023-02-17 17:15:27 +01:00
dgtlmoon
900dc5ee78 Fetching - False alerts issue #962 - be sure to avoid triggering changedetection when checksums were the same (#1410) 2023-02-17 16:59:03 +01:00
dgtlmoon
7b8b50138b Deleting a watch now removes the entire watch storage directory (#1408) 2023-02-11 14:10:54 +01:00
dgtlmoon
01af21f856 Use year/date in the backup snapshot zip filename instead of epoch seconds (#1377 #1407) 2023-02-11 13:44:16 +01:00
dgtlmoon
f7f4ab314b PDF text conversion - fix bug where it detected a site as a PDF file incorrectly Re #1392 #1393 2023-02-08 09:32:57 +01:00
dgtlmoon
ce0355c0ad Remove unused code (#1394) 2023-02-08 09:32:15 +01:00
dgtlmoon
0f43213d9d UI - preview page - Fix bug where playwright/chrome was system default and [preview] didn't show snapshot 2023-02-07 16:55:34 +01:00
dgtlmoon
93c57d9fad Adding example docker-compose.yml config to ignore errors from self-signed certs #1389 2023-02-06 17:24:12 +01:00
dgtlmoon
3cdd075baf 0.40.2 2023-02-03 19:20:13 +01:00
dgtlmoon
5c617e8530 Code cleanup - remove unused import 2023-02-03 18:35:58 +01:00
dgtlmoon
1a48965ba1 UI fix - Fix logic for showing screenshot on diff page (#1379) 2023-02-03 11:23:48 +01:00
dgtlmoon
41856c4ed8 Re #1365 - Playwright - Browser "Service Workers" should be enabled by default but unset via env var PLAYWRIGHT_SERVICE_WORKERS=block (#1367) 2023-02-01 20:50:40 +01:00
dgtlmoon
0ed897c50f New setting to allow passwordless access to your 'diff' page - perfect for sharing your diff page securely, refactored login code (#1357) 2023-01-29 22:36:55 +01:00
dgtlmoon
f8e587c415 Security - Possible stored XSS in watch list - Only permit HTTP/HTTPS/FTP by default - override with env var SAFE_PROTOCOL_REGEX (#1359) 2023-01-29 11:12:06 +01:00
dgtlmoon
d47a25eb6d Playwright - Removing old bug fix where playwright needed screenshot called twice to make the full screen screenshot be actually fullscreen (#1356) 2023-01-28 15:02:53 +01:00
dgtlmoon
9a0792d185 Fetch backend UI default fixes for VisualSelector and BrowserSteps (#1344) 2023-01-25 19:47:54 +01:00
dgtlmoon
948ef7ade4 Fix fetch UI default fetch backend option icon (#1343) 2023-01-25 18:07:44 +01:00
dgtlmoon
0ba139f8f9 Docker container build - docker container buildx version change causing errors with watchtower and others (#1336) 2023-01-24 23:45:43 +01:00
dgtlmoon
a9431191fc 0.40.1.1 2023-01-22 13:03:15 +01:00
dgtlmoon
774451f256 Re #1328 - add -6 flag to enable IPv6 (#1329) 2023-01-22 11:10:25 +01:00
dgtlmoon
04577cbf32 0.40.1.0 2023-01-21 15:38:54 +01:00
dgtlmoon
f2864af8f1 Update README.md 2023-01-21 14:02:14 +01:00
dgtlmoon
9a36d081c4 Setting docker-compose.yml version to 3.2 so it works with portainer and others #1306 #1144 #1079 2023-01-21 13:50:36 +01:00
dgtlmoon
7048a0acbd UI - Fix wrong logic when dealing with webdriver/playwright watch screenshot settings (#1325) 2023-01-21 13:47:32 +01:00
dgtlmoon
fba719ab8d Ability for watch to use a more obvious system default fetcher (#1320) 2023-01-19 21:57:58 +01:00
dgtlmoon
7c5e2d00af Update README.md 2023-01-17 22:02:51 +01:00
dgtlmoon
02b8fc0c18 pip - eventlet doesn't support dnspython >=2.3.0 (Fixes build error) 2023-01-17 22:01:56 +01:00
dgtlmoon
de15dfd80d Reliability fix - Remove loop that could cause app to stop checking if data changes (#1313) 2023-01-15 16:12:47 +01:00
dgtlmoon
024c8d8fd5 API - Improvements, support PUT for updating existing watch, set muted state, set paused state, see https://changedetection.io/docs/api_v1/index.html (#1213) 2023-01-10 19:00:57 +01:00
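A hypothetical client call against the linked v1 API docs might look like the sketch below; the base URL, UUID, field names and API key are placeholders rather than a definitive reference:

```python
import requests

# Illustrative only: update an existing watch via PUT (values are placeholders).
base = "http://localhost:5000/api/v1"
headers = {"x-api-key": "YOUR-API-KEY"}
uuid = "00000000-0000-0000-0000-000000000000"

r = requests.put(f"{base}/watch/{uuid}", headers=headers,
                 json={"paused": True, "muted": True})
r.raise_for_status()
print(r.status_code)
```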
dgtlmoon
fab7d325f7 Data storage - Don't recreate DB if it's corrupt, exit cleanly with an error so the operator can look into the problem (#1296) 2023-01-08 14:47:31 +01:00
jtagcat
58c7cbeac7 UI: Updating queued success message (#1285) 2023-01-05 21:12:02 +01:00
Abhishek Malani
ab9efdfd14 README.md - Fix release link (#1277) 2022-12-29 11:06:51 +01:00
Hmmbob
65d5a5d34c Notifications: updating apprise (slack notification fixes and others) (#1272) 2022-12-28 18:34:55 +01:00
dgtlmoon
93c157ee7f Remove docker-compose version so it works on any modern version #1144 (#1268) 2022-12-26 20:37:31 +01:00
Bill Metangmo
de85db887c Update the docker compose file to any version (#1079) (#1144) 2022-12-26 20:36:42 +01:00
dgtlmoon
50805ca38a IPv6 support for listening on (#1267) 2022-12-26 20:36:16 +01:00
dgtlmoon
fc6424c39e Test improvements (#1264) 2022-12-26 14:17:40 +01:00
dgtlmoon
f0966eb23a 0.40.0.4 2022-12-25 18:25:45 +01:00
dgtlmoon
e4fb5ab4da UI - Suggest adding proxy for watch when 403 access denied is reached (#1260) 2022-12-23 22:26:24 +01:00
dgtlmoon
e99f07a51d Filters & Notifications - fixed tokens in filter not found notification 2022-12-22 10:05:17 +01:00
dgtlmoon
08ee223b5f UI - Fix broken html tags in settings page 2022-12-20 18:57:26 +01:00
dgtlmoon
572f9b8a31 Proxy Settings in UI - TidyUp BrightData text 2022-12-20 10:08:16 +01:00
dgtlmoon
fcfd1b5e10 Ability to configure extra proxies via the UI (#1235) 2022-12-19 21:48:01 +01:00
dgtlmoon
0790dd555e Docker container updates - use Python 3.10, remove unused packages 2022-12-19 20:46:02 +01:00
dgtlmoon
0b20dc7712 Tidy up list icons a bit (#1250) 2022-12-19 20:30:32 +01:00
dgtlmoon
13c4121f52 PDF File change detection - Initial PDF fetcher support with basic text extraction (#1244) 2022-12-19 17:51:41 +01:00
dgtlmoon
e8e176f3bd Testing - Run test as fully built docker container (#1245) 2022-12-19 14:41:34 +01:00
dgtlmoon
7a1d2d924e Dark mode - system setting var is not required (its cookie based) 2022-12-19 14:13:57 +01:00
dgtlmoon
c3731cf055 0.40.0.3 2022-12-19 12:41:52 +01:00
dgtlmoon
a287e5a86c Visual Selector - Select smallest/most precise element first, better filtering of zero size elements 2022-12-19 12:33:31 +01:00
dgtlmoon
235535c327 Fetching - Check the most overdue watch first (#1242) 2022-12-17 15:40:57 +01:00
dgtlmoon
44dc62da2d Overview list - Checkbox action "Recheck" 2022-12-16 18:35:09 +01:00
dgtlmoon
0c380c170f Playwright - Better error reporting and re-try fetch on fail once (#1238) 2022-12-16 18:06:14 +01:00
dgtlmoon
b7a2501d64 Fetching - Always sort the key order of JSON content for less false alerts (May cause an alert on upgrade, but will be better going forwards) #1219 2022-12-15 09:13:09 +01:00
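Sorting JSON keys before comparing is the standard trick for avoiding false alerts caused by key-order churn; a minimal sketch of the idea (not the project's actual implementation):

```python
import hashlib
import json

def json_checksum(raw_json_text: str) -> str:
    # Re-serialise with sorted keys so {"a":1,"b":2} and {"b":2,"a":1}
    # hash identically and don't trigger a false change alert.
    normalised = json.dumps(json.loads(raw_json_text), sort_keys=True, separators=(",", ":"))
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

assert json_checksum('{"a": 1, "b": 2}') == json_checksum('{"b": 2, "a": 1}')
```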
dgtlmoon
e970fef991 Fetcher + VisualSelector - xPath filter with attribute filter was breaking the element finder 2022-12-14 19:06:49 +01:00
dgtlmoon
b76148a0f4 Fetcher - CPU usage - Skip processing if the previous checksum and the just fetched one was the same (#925) 2022-12-14 15:08:34 +01:00
dgtlmoon
93cc30437f Playwright+BrowserSteps - Fetch changes - Fetch simply after page starts rendering + delay seconds, disable service workers 2022-12-14 12:16:04 +01:00
dgtlmoon
6562d6e0d4 Improve ARM/rust build comment 2022-12-13 12:28:20 +01:00
dgtlmoon
6c217cc3b6 README.md - Improving JSONPath example for LD+JSON product data 2022-12-11 11:14:52 +01:00
dgtlmoon
f30cdf0674 0.40.0.2 2022-12-08 22:36:59 +01:00
dgtlmoon
14da0646a7 Price follower - Don't scan for ldjson data when 'no' was clicked on the suggestion (#1207) 2022-12-08 22:35:37 +01:00
dgtlmoon
b413cdecc7 Adding missing parts for pip build Re #1206 2022-12-08 21:54:55 +01:00
dgtlmoon
7bf52d9275 0.40.0 2022-12-08 20:09:42 +01:00
dgtlmoon
09e6624afd VisualSelector - Exclude items that are not interactable or visible 2022-12-08 20:08:41 +01:00
dgtlmoon
b58fd995b5 Automatically offer to track LD+JSON product price data (#1204) 2022-12-08 19:28:20 +01:00
dgtlmoon
f7bb8a0afa UI - favicon callback no longer needed 2022-12-07 12:14:36 +01:00
dgtlmoon
3e333496c1 Test cleanups (#1196) 2022-12-07 12:03:28 +01:00
Amro Hendawi
ee776a9627 Update runtime.txt (#1198) 2022-12-07 00:17:58 +01:00
dgtlmoon
65db4d68e3 Dark mode - HTML template tidy up (#1197) 2022-12-06 23:50:49 +01:00
dgtlmoon
74d93d10c3 UI - watch tags also known as watch tag / label 2022-12-06 23:16:22 +01:00
dgtlmoon
37aef0530a Notification templates - bug in update, was incorrectly updating the main system notification_title instead of the watch's 2022-12-06 18:29:09 +01:00
dgtlmoon
f86763dc7a Extract data - minor improvement to example 2022-12-06 10:53:23 +01:00
dgtlmoon
13c25f9b92 Darkmode - Pause/Mute notification colour fix, re #1195 2022-12-06 10:49:24 +01:00
dgtlmoon
265f622e75 Notification - Support for standard API calls post:// posts:// get:// gets:// delete:// deletes:// put:// puts:// (#1194) 2022-12-05 20:49:08 +01:00
dgtlmoon
c12db2b725 Notifications - tokens/jinja2 templating (#1184) 2022-12-05 19:58:43 +01:00
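The general idea of Jinja2-templated notification tokens can be shown with plain Jinja2; the token names below are only examples, not a complete list of what the application exposes:

```python
from jinja2 import Template

# Illustrative only: render a notification body containing Jinja2-style tokens.
body = Template("Change detected on {{ watch_url }}\n\n{{ diff }}")
print(body.render(watch_url="https://example.com/pricing",
                  diff="+ Now $12.99\n- Was $14.99"))
```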
dgtlmoon
a048e4a02d Dark mode - more colour fixes 2022-12-05 19:10:36 +01:00
dgtlmoon
69662ff91c Test improvement - improving notification error network test 2022-12-05 17:45:30 +01:00
dgtlmoon
fc94c57d7f Extract text as CSV - Extra validation (#1192) 2022-12-05 16:36:00 +01:00
dgtlmoon
7b94ba6f23 Dark mode - make watch list easier to read when theres 'unviewed' entries 2022-12-05 15:13:47 +01:00
dgtlmoon
2345b6b558 New feature - Simple extract data by regex from all historical watch text into CSV (#1191) 2022-12-05 14:48:03 +01:00
dgtlmoon
b8d5a12ad0 UI - Cursor over labels should be pointer 2022-12-05 10:42:48 +01:00
dgtlmoon
9e67a572c5 Dark mode - Make watches with errors easier to read 2022-12-05 09:53:53 +01:00
dgtlmoon
378d7b7362 Dark mode - cookie path should be all site 2022-12-04 20:54:15 +01:00
dgtlmoon
d1d4045c49 Tweaks - adding hover/title to dark mode button 2022-12-04 18:53:56 +01:00
dgtlmoon
77409eeb3a UI - Dark Mode (#1187) 2022-12-04 16:39:25 +01:00
peppetemp
87726e0bb2 docker-compose - Add playwright/selenium container dependencies example (#1178) 2022-12-02 16:13:59 +01:00
dgtlmoon
72222158e9 BrowserSteps - Can be shared by the watch share link 2022-12-02 09:36:13 +01:00
dgtlmoon
1814924c19 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-12-01 23:48:04 +01:00
dgtlmoon
8aae4197d7 UI - Make tabs hoverable 2022-12-01 23:47:51 +01:00
dgtlmoon
3a8a41a3ff Favicon multiplatform and path fix/update (#1176) 2022-12-01 23:29:53 +01:00
dgtlmoon
64caeea491 BrowserSteps - Cleanup interface on shutdown 2022-12-01 23:28:20 +01:00
dgtlmoon
3838bff397 BrowserSteps - More work on cleaner shutdowns of browser session 2022-12-01 23:08:28 +01:00
dgtlmoon
55ea983bda BrowserSteps - Forcefully shutdown playwright to prevent any race-conditions waiting for it to shutdown 2022-12-01 19:32:05 +01:00
dgtlmoon
b4d79839bf BrowserSteps - Make the UI require an extra step so it doesnt slow down the experience when clicking through the tabs (#1175) 2022-11-30 19:40:15 +01:00
dgtlmoon
0b8c3add34 BrowserSteps - Use correct mimetype for screenshot update 2022-11-29 14:07:53 +01:00
dgtlmoon
51d57f0963 BrowserSteps - Faster screenshot updates and enable gzip compression for all content replies in the UI (#1171) 2022-11-29 13:55:53 +01:00
dgtlmoon
6d932149e3 BrowserSteps - Add 'Execute JS' step 2022-11-29 09:09:26 +01:00
dgtlmoon
2c764e8f84 BrowserSteps - Also try to find clickable div/spans 2022-11-29 08:46:11 +01:00
dgtlmoon
07765b0d38 Update README.md 2022-11-28 20:55:18 +01:00
dgtlmoon
7c3faa8e38 Update README.md 2022-11-28 19:24:10 +01:00
dgtlmoon
4624974b91 BrowserSteps - Element finder filter (offpage) should also calculate top scroll offset 2022-11-28 18:04:02 +01:00
dgtlmoon
991841f1f9 Visual Selector and BrowserSteps - More accurate element detection when the page auto-scrolls on load Re #1169 2022-11-28 17:31:50 +01:00
dgtlmoon
e3db324698 Extra validation for URLs with template markup (#1166) 2022-11-27 16:18:11 +01:00
dgtlmoon
0988bef2cd Browser Steps - adding 'please wait' text while loading 2022-11-27 11:41:41 +01:00
dgtlmoon
5b281f2c34 Re #1163 psutil missing from pip requirements 2022-11-26 00:32:57 +01:00
dgtlmoon
a224f64cd6 Update README.md 2022-11-25 11:16:02 +01:00
dgtlmoon
7ee97ae37f Update README.md 2022-11-25 11:14:29 +01:00
dgtlmoon
69756f20f2 VisualSelector & BrowserSteps - Scraper improvements, remove duplicate code 2022-11-25 10:45:38 +01:00
dgtlmoon
326b7aacbb Bumping VisualSelector example animation 2022-11-25 10:02:18 +01:00
dgtlmoon
fde7b3fd97 Remove dupe xpath finder prep code 2022-11-25 09:25:05 +01:00
dgtlmoon
9d04cb014a Browsersteps 'Beta' label image path fix 2022-11-25 09:14:19 +01:00
dgtlmoon
5b530ff61c Configurable "Browser Steps" when Playwright/Chrome is configured (enter text, scroll, wait for text, click button etc) (#478) 2022-11-24 20:53:01 +01:00
Maeglin
c98536ace4 Update README.md - Make docker instructions easier to follow on Windows (#1158) 2022-11-23 14:42:36 +01:00
dgtlmoon
463747d3b7 0.39.22.1 2022-11-22 18:09:25 +01:00
dgtlmoon
791bdb42aa Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-11-22 18:09:03 +01:00
dgtlmoon
ce6c2737a8 Notification screenshot/JPEG was not being regenerated correctly (#1149) 2022-11-22 18:08:46 +01:00
dgtlmoon
ade9e1138b Re #1148 - Notification screenshot/JPEG was not being regenerated correctly 2022-11-22 17:41:06 +01:00
dgtlmoon
68d5178367 Update README.md 2022-11-21 00:24:15 +01:00
dgtlmoon
41dc57aee3 Update README.md 2022-11-21 00:20:55 +01:00
dgtlmoon
943704cd04 0.39.22 2022-11-20 16:29:16 +01:00
dgtlmoon
883561f979 Fix dangling HTML tag from screenshot notification 2022-11-20 16:04:26 +01:00
dgtlmoon
35d44c8277 Notification screenshot option should only be available to webdriver/playwright watches, screenshot sent as JPEG to save bandwidth, simplify the logic around screenshots (#1140) 2022-11-20 14:40:41 +01:00
dgtlmoon
d07d7a1b18 Minor test improvements 2022-11-20 11:35:35 +01:00
Matthias Bilger
f066a1c38f Option to attach screenshot to notification (#1127) 2022-11-20 09:37:48 +01:00
dgtlmoon
d0d191a7d1 VisualFilter - check previously set filters were set before highlighting 2022-11-19 17:37:51 +01:00
dgtlmoon
d7482c8d6a Add diff view option for JSON compare (comparing the fields defined on each. The order of fields, etc does not matter in this comparison.) 2022-11-19 15:17:09 +01:00
dgtlmoon
bcf7417f63 Update visual text difference library, add option to ignore whitespace when viewing diff (#1137) 2022-11-19 15:08:27 +01:00
dgtlmoon
df6e835035 Make VisualSelector show first available multiple selector, refactor to make more maintainable (#1132) 2022-11-17 11:52:48 +01:00
dgtlmoon
ab28f20eba Make link to notification debug log easier to find (#1130) 2022-11-16 09:17:57 +01:00
Hmmbob
1174b95ab4 Bump notification library (#1128) 2022-11-15 22:54:12 +01:00
dgtlmoon
a564475325 Re #1126 HIDE_REFERER setting had wrong default 2022-11-14 10:28:05 +01:00
dgtlmoon
85d8d57997 Test: Re-test under HIDE_REFERER condition, use strtobool so you can use 'False' (#1121) 2022-11-12 13:57:41 +01:00
dgtlmoon
359dcb63e3 Stability fix related to the new watch check count (#1113) 2022-11-10 20:01:07 +01:00
dgtlmoon
b043d477dc Use deepcopy to stop possible data corruption (#1108) 2022-11-08 12:18:38 +01:00
dgtlmoon
06bcfb28e5 Code- Use dict .get instead of key 2022-11-07 20:43:20 +01:00
dgtlmoon
ca3b351bae Adding a check counter to watch fetching (#1099) 2022-11-06 09:48:07 +01:00
dgtlmoon
b7e0f0a5e4 Update README.md 2022-11-05 12:22:52 +01:00
dgtlmoon
61f0ac2937 HIDE_REFERER incompatible with password based login, added comment to code #996 2022-11-04 23:46:03 +01:00
dgtlmoon
fca66eb558 Update README.md 2022-11-03 14:29:38 +01:00
dgtlmoon
359fc48fb4 Filters can now accept a list/multiple filters (#1064) #623 2022-11-03 12:13:54 +01:00
dgtlmoon
d0efeb9770 0.39.21.1 2022-11-02 23:48:10 +01:00
dgtlmoon
3416532cd6 Playwright extension added back to Dockerfile to resolve conditional fix Alpine (musl) based systems (#1087) 2022-11-02 23:47:44 +01:00
dgtlmoon
defc7a340e 0.39.21 2022-11-02 15:12:33 +01:00
dgtlmoon
c197c062e1 Disable version check when pytest is running (#1084) 2022-11-01 18:26:29 +01:00
dgtlmoon
77b59809ca Removing unused code (#1070) 2022-10-28 18:36:07 +02:00
dgtlmoon
f90b170e68 Docker & python - Jq conditional pip requirements.txt include (Don't install in Windows because there's no Windows library/wheel) 2022-10-27 23:26:14 +02:00
dgtlmoon
c93ca1841c Docker & python - Use pip conditional requirements to not install playwright for ARM (unsupported on ARM) (#1067) 2022-10-27 23:17:05 +02:00
Sandro
57f604dff1 UI - Make fetch error more readable (#1038) 2022-10-27 16:40:24 +02:00
dgtlmoon
8499468749 Update README.md 2022-10-27 15:17:14 +02:00
dgtlmoon
7f6a13ea6c Re #1052 - Watch 'open' link should use any dynamic/template info (#1063) 2022-10-27 13:29:24 +02:00
dgtlmoon
9874f0cbc7 Remove accidental files 2022-10-27 12:43:02 +02:00
dgtlmoon
72834a42fd Backups and Snapshots - Data directory now fully portable (all paths are relative), refactored backup zip export creation 2022-10-27 12:35:26 +02:00
dgtlmoon
724cb17224 Re #1052 - Dynamic URLs, use variables in the URL (such as the current date, the date in a month, and other logic see https://github.com/dgtlmoon/changedetection.io/wiki/Handling-variables-in-the-watched-URL ) (#1057) 2022-10-24 23:20:39 +02:00
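The linked wiki page describes Jinja2 markup inside the watched URL; a `{% now %}` tag of this kind is provided by the jinja2-time extension. A standalone sketch, assuming that extension is what backs the feature:

```python
from jinja2 import Environment

# Assumption: the jinja2-time extension (pip install jinja2-time) supplies {% now %}.
env = Environment(extensions=["jinja2_time.TimeExtension"])
url = env.from_string(
    "https://example.com/report?date={% now 'utc', '%Y-%m-%d' %}"
).render()
print(url)  # e.g. https://example.com/report?date=2022-10-24
```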
dgtlmoon
4eb4b401a1 API - system info - allow 5 minutes grace before watch is considered 'overdue' 2022-10-23 23:12:28 +02:00
dgtlmoon
5d40e16c73 API - Adding basic system info/system state API (#1051) 2022-10-23 19:15:11 +02:00
dgtlmoon
492bbce6b6 Build - Fix syntax in container build test (#1050) 2022-10-23 16:02:13 +02:00
dgtlmoon
0394a56be5 Building - Test container build on PR 2022-10-23 15:54:19 +02:00
Entepotenz
7839551d6b Testing - Use same version of playwright while running tests as in production builds (#1047) 2022-10-23 11:26:32 +02:00
Entepotenz
9c5588c791 update path for validation in the CONTRIBUTING.md (#1046) 2022-10-23 11:25:29 +02:00
dgtlmoon
5a43a350de History index safety check - Be sure that only valid history index lines are read (#1042) 2022-10-19 22:41:13 +02:00
Michael McMillan
3c31f023ce Option to Hide the Referer header from monitored websites. (#996) 2022-10-18 09:16:22 +02:00
dgtlmoon
4cbcc59461 0.39.20.4 2022-10-17 18:36:47 +02:00
dgtlmoon
4be0260381 Better cross platform file handling in diff and preview (#1034) 2022-10-17 18:36:22 +02:00
dgtlmoon
957a3c1c16 0.39.20.3 2022-10-17 17:43:35 +02:00
dgtlmoon
85897e0bf9 Windows - diff file handling improvements (#1031) 2022-10-17 17:40:28 +02:00
dgtlmoon
63095f70ea Also include tests in pip build 2022-10-17 17:13:15 +02:00
dgtlmoon
8d5b0b5576 Update README.md 2022-10-12 10:51:39 +02:00
dgtlmoon
1b077abd93 0.39.20.2 2022-10-12 09:53:59 +02:00
dgtlmoon
32ea1a8721 Windows - JQ - Make library optional so it doesn't break Windows pip installs (#1009) 2022-10-12 09:53:16 +02:00
dgtlmoon
fff32cef0d Adding test - Test the 'execute JS before changedetection' (#1006) 2022-10-11 14:40:36 +02:00
dgtlmoon
8fb146f3e4 0.39.20.1 2022-10-09 23:05:35 +02:00
dgtlmoon
770b0faa45 Code - check containers build when Dockerfile or requirements.txt changes (#1005) 2022-10-09 22:58:01 +02:00
dgtlmoon
f6faa90340 Adding make to Dockerfile build as required by jq for ARM devices 2022-10-09 22:29:18 +02:00
dgtlmoon
669fd3ae0b Don't use the default Requests user-agent and accept headers in playwright+selenium requests, it breaks sites such as united.com. (#1004) 2022-10-09 18:25:36 +02:00
dgtlmoon
17d37fb626 0.39.20 2022-10-09 16:13:32 +02:00
Yusef Ouda
dfa7fc3a81 Adds support for jq JSON path querying engine (#1001) 2022-10-09 16:12:45 +02:00
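For reference, the jq Python binding (made optional on some platforms, per other commits in this list) works roughly as below; the query and data are only an example of the kind of JSON path filtering the feature enables:

```python
import jq  # pip install jq

data = {"products": [{"name": "widget", "price": 12.5},
                     {"name": "gadget", "price": 9.0}]}
prices = jq.compile(".products[].price").input(data).all()
print(prices)  # [12.5, 9.0]
```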
dgtlmoon
cd467df97a Adding link to BrightData Proxy info (#1003) 2022-10-09 15:51:57 +02:00
dgtlmoon
71bc2fed82 Remove quotationspage default watch 2022-10-09 14:06:07 +02:00
Hmmbob
738fcfe01c Notification library: Bump apprise to 1.1.0 (signal, opsgenie, pagerduty, bark and mailto fixes, adds support for BulkSMS and SMSEagle) (#1002) 2022-10-09 11:42:51 +02:00
dgtlmoon
3ebb2ab9ba Selenium fetcher - screenshot should be taken after 'wait' time, not before #873 2022-09-25 11:05:07 +02:00
dgtlmoon
ac98bc9144 Upgrade Playwright to 1.26 2022-09-24 23:51:26 +02:00
dgtlmoon
3705ce6681 Render Extract Configurable Delay Seconds should also apply after executing any JS #958 2022-09-24 23:48:03 +02:00
dgtlmoon
f7ea99412f Re #958 - remove the screen-size change, should stay at the 1280x720 default, it was causing "Unable to retrieve content because the page is navigating and changing the content." on some sites 2022-09-19 14:02:32 +02:00
dgtlmoon
d4715e2bc8 Tidy up proxies.json logic, adding tests (#955) 2022-09-19 13:14:35 +02:00
dgtlmoon
8567a83c47 Update README.md - Include BrightData suggestion 2022-09-16 13:21:01 +02:00
dgtlmoon
77fdf59ae3 Improve Proxy minimum time debug output 2022-09-15 17:17:07 +02:00
dgtlmoon
0e194aa4b4 Default proxy settings fixes 2022-09-15 16:58:23 +02:00
dgtlmoon
2ba55bb477 Use proxies.json instead of proxies.txt - see wiki Proxies section (#945) 2022-09-15 15:25:23 +02:00
dgtlmoon
4c759490da Upgrade Playwright to 1.25 2022-09-15 15:10:40 +02:00
dgtlmoon
58a52c1f60 Update README.md 2022-09-13 15:29:05 +02:00
dgtlmoon
22638399c1 0.39.19.1 2022-09-11 09:23:43 +02:00
dgtlmoon
e3381776f2 Notification - code tidyup 2022-09-11 09:08:13 +02:00
dgtlmoon
26e2f21a80 Watch list & notification - Adding extra list batch operations for Mute, Unmute, Reset-to-default 2022-09-10 15:29:39 +02:00
dgtlmoon
b6009ae9ff Notification - Reset defaults button should be on edit page only 2022-09-10 15:19:18 +02:00
dgtlmoon
b046d6ef32 Notification watch settings - add button to make watch use defaults (empties the settings) 2022-09-10 15:11:31 +02:00
dgtlmoon
e154a3cb7a Notification system update - set watch to use defaults if it is the same as the default 2022-09-10 15:01:11 +02:00
Jason Nader
1262700263 Fix typo (#924) 2022-09-09 12:08:01 +02:00
dgtlmoon
434c5813b9 0.39.19 2022-09-08 20:16:35 +02:00
dgtlmoon
0a3dc7d77b Update README.md 2022-09-08 20:15:23 +02:00
dgtlmoon
a7e296de65 Tweaks to python PIP readme 2022-09-08 17:53:58 +02:00
dgtlmoon
bd0fbaaf27 Use play and pause separate icons (#919) 2022-09-08 17:50:45 +02:00
dgtlmoon
0c111bd9ae Further notification settings refinement (#910) 2022-09-08 09:10:04 +02:00
dgtlmoon
ed9ac0b7fb Reliability improvement - Check watch UUID exists when reporting missing path (#915) 2022-09-07 23:04:35 +02:00
dgtlmoon
743a3069bb repair pip readme 2022-09-04 15:23:32 +02:00
dgtlmoon
fefc39427b Test improvement - Visual selector data loads as JSON (#895) 2022-08-31 16:32:50 +02:00
dgtlmoon
2c6faa7c4e Cleaner separation of watch/global notification settings (#894) 2022-08-31 15:49:13 +02:00
dgtlmoon
6168cd2899 Code maintenance - Removing old function (#875) 2022-08-31 15:23:10 +02:00
dgtlmoon
f3c7c969d8 Show screenshot age in [preview] 2022-08-25 11:18:00 +02:00
dgtlmoon
1355c2a245 Update README.md 2022-08-25 11:00:20 +02:00
dgtlmoon
96cf1a06df Update README.md 2022-08-24 23:26:55 +02:00
dgtlmoon
019a4a0375 Update README.md 2022-08-24 09:52:11 +02:00
dgtlmoon
db2f7b80ea Update bug_report.md 2022-08-20 15:30:53 +02:00
dgtlmoon
bfabd7b094 Update bug_report.md 2022-08-20 15:28:01 +02:00
dgtlmoon
d92dbfe765 Update README.md 2022-08-20 00:39:37 +02:00
dgtlmoon
67d2441334 0.39.18 2022-08-19 11:37:26 +02:00
dgtlmoon
3c30bc02d5 More data saving pre-checks (#863) 2022-08-18 23:25:23 +02:00
dgtlmoon
dcb54117d5 Update screenshot 2022-08-18 15:29:34 +02:00
dgtlmoon
b1e32275dc Checkbox operations - reorder buttons for safety 2022-08-18 15:10:05 +02:00
dgtlmoon
e2a6865932 UI feature - Basic checkbox/group operations (#861) 2022-08-18 14:48:21 +02:00
dgtlmoon
f04adb7202 Bug fix - automatically queued watch checks weren't always being processed sequentially 2022-08-18 13:41:28 +02:00
dgtlmoon
1193a7f22c Playwright - Support proxy auth mechanisms (#859) 2022-08-18 09:46:28 +02:00
dgtlmoon
0b976827bb Update README.md 2022-08-17 22:11:04 +02:00
dgtlmoon
280e916033 Update README.md 2022-08-17 22:00:46 +02:00
dgtlmoon
5494e61a05 Skip processing when watch was deleted 2022-08-17 13:29:32 +02:00
dgtlmoon
e461c0b819 Playwright fetcher didn't report low level HTTP errors correctly (like Connection Refused) (#852) 2022-08-17 13:25:08 +02:00
dgtlmoon
d67c654f37 Be sure visual-selector data is set when xPath/CSS filter is not yet found (#851) 2022-08-17 13:21:06 +02:00
dgtlmoon
06ab34b6af Visual selector data not being saved by refactor 2022-08-16 16:53:15 +02:00
dgtlmoon
ba8676c4ba 'Save chrome screenshot' checkbox never used, removing, we always save the screenshot. (#844) 2022-08-16 16:18:09 +02:00
dgtlmoon
4899c1a4f9 Crash fix: Data store sub-directories weren't always being created when needed (#842) 2022-08-16 15:17:36 +02:00
dgtlmoon
9bff1582f7 Make the table header easier to understand when sorting (#840) 2022-08-16 12:49:08 +02:00
dgtlmoon
269e3bb7c5 Column sorting (#838) 2022-08-16 10:45:36 +02:00
dgtlmoon
9976f3f969 Update README.md 2022-08-15 22:27:45 +02:00
dgtlmoon
1f250aa868 Revert "don't process paused entries after queue", so we can still manually recheck a paused watch 2022-08-15 22:19:17 +02:00
dgtlmoon
1c08d9f150 Remove 'last-changed' from url-watches.json and always calculate from history index (#835) 2022-08-15 21:14:18 +02:00
dgtlmoon
9942107016 Massive improvements to error handling - show separate output for non HTTP 200 status replies 2022-08-15 18:56:53 +02:00
dgtlmoon
1eb5726cbf Execute JS should happen after waiting seconds 2022-08-15 11:27:04 +02:00
dgtlmoon
b3271ff7bb Upgrade playwright python driver (#834) 2022-08-14 19:53:42 +02:00
dgtlmoon
f82d3b648a Crash protection - handle the case where watch was deleted while being checked (#833) 2022-08-14 19:13:45 +02:00
dgtlmoon
034b1330d4 Don't process a watch if it was paused after being queued (#825) 2022-08-09 10:48:18 +02:00
Hmmbob
a7d005109f Notification Library Update (fixes for Home Assistant) - update requirements.txt (#818) 2022-08-08 08:48:38 +02:00
dgtlmoon
048c355e04 Remove social links for now 2022-08-07 19:24:27 +02:00
dgtlmoon
4026575b0b 0.39.17.2 2022-08-05 15:53:09 +02:00
dgtlmoon
8c466b4826 Test fix - Remove debug from test 2022-08-05 08:26:37 +02:00
dgtlmoon
6f072b42e8 Security update - Password could be unset from settings form unexpectedly (#808) 2022-08-05 00:05:43 +02:00
dgtlmoon
e318253f31 Disable SIGCHLD Handler for now - keeping SIGTERM for DB writes 2022-08-04 23:48:36 +02:00
dgtlmoon
f0f2fe94ce Handle SIGTERM for cleaner shutdowns (#737) 2022-08-02 10:21:25 +02:00
dgtlmoon
26f5c56ba4 Remove [save & preview] button, the preview is not updated live so it can lead to confusion (#801) 2022-08-01 14:47:00 +02:00
dgtlmoon
a1c3107cd6 Feature - priority queue - edited and added watches should get checked before automatically queued watches (#799) 2022-07-31 15:35:35 +02:00
dgtlmoon
8fef3ff4ab [preview current] cleanup code and add test 2022-07-30 20:11:56 +02:00
dgtlmoon
baa25c9f9e Feature - mute notifications (#791) 2022-07-29 21:09:55 +02:00
dgtlmoon
488699b7d4 Test improvement - remove unnecessary step 2022-07-29 10:23:59 +02:00
dgtlmoon
cf3a1ee3e3 0.39.17.1 2022-07-29 10:13:29 +02:00
dgtlmoon
daae43e9f9 Bug fix: Filter failure detection notification was interfering with change-detection results, added test case (#786) 2022-07-29 10:11:49 +02:00
dgtlmoon
cdeedaa65c README.md - new Discord invite link 2022-07-28 23:07:07 +02:00
dgtlmoon
3c9d2ded38 0.39.17 2022-07-28 13:07:51 +02:00
dgtlmoon
9f4364a130 Add https://discord.com/api notification hook to the automatic truncation due to Discord's 2000 char limit 2022-07-28 12:34:55 +02:00
dgtlmoon
5bd9eaf99d UI Feature - Add watch in "paused" state, saving then unpauses (#779) 2022-07-28 12:13:26 +02:00
dgtlmoon
b1c51c0a65 Enhancement - support xPath text() function filter, for example "//title/text()" in RSS feeds (#778) 2022-07-28 11:50:31 +02:00
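xPath's text() selects text nodes rather than elements, which is why it needs special handling; with plain lxml, for illustration (not necessarily the exact xPath engine the project uses):

```python
from lxml import etree

rss = b"<rss><channel><item><title>Price drop!</title></item></channel></rss>"
tree = etree.fromstring(rss)
# text() returns the text nodes themselves, so the result is a list of strings.
print(tree.xpath("//title/text()"))  # ['Price drop!']
```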
dgtlmoon
232bd92389 Bug fix - Filter "Only trigger when new lines appear" should check all history, not only the first item (#777) 2022-07-28 10:16:19 +02:00
dgtlmoon
e6173357a9 Visual Selector direct element finder fix 2022-07-28 09:19:10 +02:00
dgtlmoon
f2b8888aff Update README.md 2022-07-27 14:25:24 +02:00
dgtlmoon
9c46f175f9 Update README.md links 2022-07-27 14:23:18 +02:00
dgtlmoon
1f27865fdf Whether filter-failure notification sending is enabled by default is now controlled by an env var setting 2022-07-27 00:01:51 +02:00
dgtlmoon
faa42d75e0 Refactor of extract text filter - Regex, support Regex (groups) and all python regex flags via /something/aiLmsux (#773) 2022-07-26 17:34:34 +02:00
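The `/something/aiLmsux` form maps onto Python's standard `re` flag letters; the small sketch below shows how such a filter string could be interpreted. The parsing helper is hypothetical, while the flag letters are the real `re` ones:

```python
import re

# Hypothetical parser for an "/expression/flags" style extract-text filter.
FLAG_LETTERS = {"a": re.A, "i": re.I, "L": re.L, "m": re.M,
                "s": re.S, "u": re.U, "x": re.X}

def parse_slash_regex(expr):
    body, _, letters = expr.rpartition("/")
    flags = 0
    for letter in letters:
        flags |= FLAG_LETTERS[letter]
    return re.compile(body.lstrip("/"), flags)

pattern = parse_slash_regex(r"/price: (\d+)/is")
print(pattern.findall("PRICE: 42"))  # ['42'] thanks to the i (IGNORECASE) flag
```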
dgtlmoon
3b6e6d85bb Update README.md - adding LinkedIn link 2022-07-25 00:28:41 +02:00
dgtlmoon
30d6a272ce Update README.md - Adding Discord and YouTube links 2022-07-24 23:06:42 +02:00
dgtlmoon
291700554e Bug fix for alerting when xPath based filters are no longer present (#772) 2022-07-23 19:39:52 +02:00
dgtlmoon
a82fad7059 Send notification when CSS/xPath filter is missing after more than 6 (configurable) attempts (#771) 2022-07-23 17:19:00 +02:00
dgtlmoon
c2fe5ae0d1 mailto plaintext handling fix for 'plaintext' apprise integration 2022-07-23 16:55:31 +02:00
dgtlmoon
5beefdb7cc Minor code cleanups 2022-07-23 13:18:44 +02:00
dgtlmoon
872bbba71c Notifications - email - Correctly send plaintext notification email with plaintext header (#767) 2022-07-21 15:22:20 +02:00
Jonathon Sisson
d578de1a35 Form text tweak - Regex clarification (#766) 2022-07-21 10:05:59 +02:00
dgtlmoon
cdc104be10 Update README.md 2022-07-20 14:37:45 +02:00
dgtlmoon
dd0eeca056 Handle simple obfuscations - HomeDepot.com style price obfuscation (#764) 2022-07-20 14:02:22 +02:00
dgtlmoon
a95468be08 Fixing docker-compose.yml PLAYWRIGHT_DRIVER_URL example URL 2022-07-15 20:45:29 +02:00
Brandon Wees
ace44d0e00 Notifications fix - Discord - added discord webhook base url to truncation rules (#753)
Co-authored-by: bwees <branonwees@gmail.com>
2022-07-14 17:41:12 +02:00
dgtlmoon
ebb8b88621 Update Playwright URI Env example with stealth setting and CORS workaround (more reliable fetching) 2022-07-12 22:36:22 +02:00
dgtlmoon
12fc2200de remove extra file 2022-07-12 22:32:20 +02:00
dgtlmoon
52d3d375ba removing package-lock.json - not required to be in git 2022-07-10 20:29:11 +02:00
dgtlmoon
08117089e6 Share-icon cleanups 2022-07-10 20:24:49 +02:00
dgtlmoon
2ba3a6d53f Test improvement: Extract text should return all matches 2022-07-10 20:05:48 +02:00
dgtlmoon
2f636553a9 Bug fix: RSS Feed should also announce utf-8 charset 2022-07-10 18:50:21 +02:00
Freddie Leeman
0bde48b282 Regex extract filter: Return all regex results instead of first match (#730) 2022-07-10 15:09:10 +02:00
dgtlmoon
fae1164c0b Ability to specify JS before running change-detection (#744) 2022-07-10 13:56:01 +02:00
dgtlmoon
169c293143 Playwright - log console errors to output 2022-07-10 13:55:29 +02:00
dgtlmoon
46cb5cff66 UI Improvement - Clarifying "Visual Filter" tool as "Visual Selector Filter" 2022-07-10 12:51:12 +02:00
Simo Elalj
05584ea886 Use environment variables to override new watch settings defaults (user-agent, timeout, workers) (#742) 2022-07-08 20:50:04 +02:00
marvin8
176a591357 Update docker-compose.yml - Remove duplicate environment variables from playwright-chrome sample config in docker-compose.yml (#738) 2022-07-06 09:03:10 +02:00
dgtlmoon
15569f9592 0.39.16 2022-07-05 16:14:57 +02:00
dgtlmoon
5f9e475fe0 Fix notification apprise application name to changedetection.io #731 2022-06-30 23:11:03 +02:00
dgtlmoon
34b8784f50 Update README.md 2022-06-30 16:16:58 +02:00
dgtlmoon
2b054ced8c [new filter] Filter option - Trigger only when NEW content (lines) are detected ( compared to earlier text snapshots ) (#685) 2022-06-28 18:34:32 +02:00
dgtlmoon
6553980cd5 Playwright - Use HTTP Request Headers override (Cookie, etc) 2022-06-25 23:42:48 +02:00
jtagcat
7c12c47204 lang: prefer 'clear (snap) history' to 'scrub' (#721) 2022-06-25 13:43:57 +02:00
dgtlmoon
dbd9b470d7 Minor diff page improvements - list should be sorted 'newest first' and no need to include the current version to compare against (#716) 2022-06-23 10:16:05 +02:00
dgtlmoon
83555a9991 bug fix: last_changed was being set on the first fetch, should only be set on the change after the first fetch #705 2022-06-23 09:41:55 +02:00
dgtlmoon
5bfdb28bd2 Update README.md 2022-06-16 11:02:22 +02:00
dgtlmoon
31a6a6717b Improve docker-compose.yml browserless docker container example, add env var for STEALTH and BLOCK_ADS (#701) 2022-06-15 23:50:48 +02:00
dgtlmoon
7da32f9ac3 New filter - Block change-detection if text matches - for example, block change-detection while the text "out of stock" is on the page, know when the text is no longer on the page (#698) 2022-06-15 22:59:37 +02:00
dgtlmoon
bb732d3d2e Docker containers - :latest is now stable release, :dev is now master/nightly 2022-06-15 22:59:21 +02:00
dgtlmoon
485e55f9ed Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-06-15 19:12:34 +02:00
dgtlmoon
601a20ea49 Trigger filters improvement - it's possible some changes weren't getting detected because the previous checksum only recorded when an event occurred (#697) 2022-06-15 19:11:20 +02:00
dgtlmoon
76996b9eb8 Some changes weren't getting triggered because the previous checksum only recorded when an event occurred 2022-06-15 17:18:46 +02:00
dgtlmoon
fba2b1a39d Notifications regression bug in 0.39.15 - only sent the first notification URL 2022-06-15 17:05:03 +02:00
dgtlmoon
4a91505af5 Playwright screenshots - no need for high-res "bug workaround" screenshot, use lower quality/faster configurable image quality env var 2022-06-15 10:52:24 +02:00
dgtlmoon
4841c79b4c Adding extra check when updating DB on ReplyWithContentButNoText 2022-06-14 19:54:35 +02:00
dgtlmoon
2ba00d2e1d Notifications log - log what was sent after applying all cleanups 2022-06-14 17:01:13 +02:00
dgtlmoon
19c96f4bdd Re #555 - tgram:// notifications - strip added HTML tag which is not supported by Telegram 2022-06-14 12:00:21 +02:00
dgtlmoon
82b900fbf4 Give a more helpful error message when a page doesn't load 2022-06-14 08:16:22 +02:00
dgtlmoon
358a365303 Tweaks to playwright fetch code - better timeout handling 2022-06-13 23:39:43 +02:00
dgtlmoon
a07ca4b136 Re #580 - New functionality - Random "jitter" delay to requests (#681) 2022-06-13 12:41:53 +02:00
dgtlmoon
ba8cf2c8cf 0.39.15 2022-06-12 14:05:34 +02:00
dgtlmoon
3106b6688e Watch overview list - adding spinner to make it easier to see whats currently being 'Checked' 2022-06-12 12:52:17 +02:00
dgtlmoon
2c83845dac Preview section - add helpful check 2022-06-12 11:10:06 +02:00
dgtlmoon
111266d6fa Send test notification - improved handling of errors 2022-06-12 10:47:00 +02:00
dgtlmoon
ead610151f Notification log - also log normal requests and make the log easier to find 2022-06-11 23:07:09 +02:00
dgtlmoon
7e1e763989 Update bug_report.md 2022-06-11 00:43:28 +02:00
dgtlmoon
327cc4af34 Use correct RSS CDATA handling (#662) 2022-06-08 18:40:01 +02:00
dgtlmoon
6008ff516e Improve logging (#671) 2022-06-08 18:32:41 +02:00
dgtlmoon
cdcf4b353f New [scrub] button when editing a watch - scrub single watch history (#672) 2022-06-08 18:32:25 +02:00
dgtlmoon
1ab70f8e86 Diff + Preview - Hide date selector widget when viewing screenshots as it's not yet possible to compare screenshots (but will be soon!) 2022-06-07 19:53:13 +02:00
dgtlmoon
8227c012a7 Diff + Preview - Fixing screenshot behaviour after preference change 2022-06-07 19:51:17 +02:00
dgtlmoon
c113d5fb24 Screenshot handling on the diff/preview section refactor (#630) 2022-06-07 19:22:42 +02:00
dgtlmoon
8c8d4066d7 Shared watches - include "Extract text" filter 2022-06-07 17:06:05 +02:00
dgtlmoon
277dc9e1c1 Improve error message when filter not found in page result (#666) 2022-06-07 16:43:57 +02:00
dgtlmoon
fc0fd1ce9d "Extract text" filter - improve placeholder example 2022-06-06 18:26:47 +02:00
dgtlmoon
bd6127728a Visual selector - 'clear selection' button should clear the filter also 2022-06-06 17:07:29 +02:00
dgtlmoon
4101ae00c6 New feature - "Extract text" filter ability (#624) 2022-06-06 16:57:50 +02:00
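A rough sketch of what a text-extraction filter of this kind can do, assuming a regular-expression rule applied to the rendered page text (the app's exact rule syntax may differ):

    import re

    page_text = "Widget 3000\nPrice: 19.99 EUR\nIn stock"

    # Keep only the lines that match the rule, e.g. the price line.
    rule = re.compile(r"Price:.*")
    extracted = "\n".join(m.group(0) for m in rule.finditer(page_text))
    print(extracted)  # -> Price: 19.99 EUR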
dgtlmoon
62f14df3cb Fixing RSS feed HTML content formatting (#662) 2022-06-06 10:24:39 +02:00
Fuzzy
560d465c59 Update notification library - Improving telegram support 2022-06-06 10:07:50 +02:00
dgtlmoon
7929aeddfc 'Mark all viewed' button was missing in this version, added test also. (#652) 2022-06-02 10:01:03 +02:00
dgtlmoon
8294519f43 Content fetcher - Handle when a page doesn't load properly 2022-06-01 13:12:37 +02:00
dgtlmoon
8ba8a220b6 Playwright - Correctly close browser context/sessions on exceptions 2022-06-01 12:59:44 +02:00
dgtlmoon
aa3c8a9370 Move history data to a textfile, improves memory handling (#638) 2022-05-31 23:43:50 +02:00
dgtlmoon
dbb5468cdc Update feature_request.md 2022-05-31 22:07:22 +02:00
dgtlmoon
329c7620fb Remove UK Covid news 2022-05-31 22:04:35 +02:00
Amos (LFlare) Ng
1f974bfbb0 Visual Selector fix: Firefox compatibility - Visual Selector (#646) 2022-05-31 09:04:01 +02:00
Tim Loderhose
437c8525af Remove group tag arbitrary length limit (#645) 2022-05-30 18:28:53 +02:00
dgtlmoon
a2a1d5ae90 Distill.io import bug fix when no tags assigned to a watch (#557) 2022-05-29 22:04:23 +02:00
dgtlmoon
2566de2aae Ignore whitespace is on by default 2022-05-28 13:30:57 +02:00
dgtlmoon
dfec8dbb39 Visual Selector - clear events when changing tabs 2022-05-25 15:47:30 +02:00
dgtlmoon
5cefb16e52 Minor code cleanup 2022-05-25 15:38:40 +02:00
dgtlmoon
341ae24b73 Re #616 - content trigger - adding extra test (#620) 2022-05-25 15:31:59 +02:00
dgtlmoon
f47c2fb7f6 README.md update Visual Selector tool - tidy up screenshots, improve text 2022-05-25 11:44:59 +02:00
dgtlmoon
9d742446ab Playwright - ByPass CSP for more reliable JS scraping, disable accept downloads 2022-05-25 11:05:18 +02:00
dgtlmoon
e3e022b0f4 VisualSelector - Better handling of filter targets that are no longer available in the HTML 2022-05-25 10:23:43 +02:00
dgtlmoon
6de4027c27 Update bug_report.md 2022-05-24 14:13:11 +02:00
dgtlmoon
cda3837355 pip build fix - include API module 2022-05-24 00:16:50 +02:00
dgtlmoon
7983675325 Visual Selector - be more resilient when sites interfere with the XPath scraping 2022-05-24 00:10:38 +02:00
dgtlmoon
eef56e52c6 Adding new Visual Selector for choosing the area of the webpage to monitor - playwright/browserless only (#566) 2022-05-23 23:44:51 +02:00
dgtlmoon
8e3195f394 0.39.14 2022-05-23 14:40:26 +02:00
dgtlmoon
e17c2121f7 Fix encoding errors with XPath filters from UTF-8 responses (#619) 2022-05-20 18:07:08 +02:00
dgtlmoon
07e279b38d API Interface (#617) 2022-05-20 16:27:51 +02:00
dgtlmoon
2c834cfe37 Add note that changedetection is not performed on the screenshot just yet (WIP https://github.com/dgtlmoon/changedetection.io/pull/419 ) 2022-05-20 12:52:41 +02:00
dgtlmoon
dbb5c666f0 Fixing edit template HTML 2022-05-18 14:09:39 +02:00
dgtlmoon
70b3493866 Proxy settings on watch should have a "[ ] default" option (#610) 2022-05-18 13:59:54 +02:00
dgtlmoon
3b11c474d1 Input field tidyup (#611) 2022-05-18 13:59:17 +02:00
dgtlmoon
890e1e6dcd Update wiki link for 'More info' about sharing a watch and its configuration 2022-05-17 22:44:36 +02:00
dgtlmoon
6734fb91a2 Option to control if pages with no renderable content are a change (example: JS webapps that don't render any text sometimes) (#608) 2022-05-17 22:22:00 +02:00
dgtlmoon
16809b48f8 Playwright - raise EmptyReply on empty reply, no need to process further 2022-05-17 18:40:15 +02:00
dgtlmoon
67c833d2bc Re #214 - configurable wait extra seconds for webdriver requests before extracting text (#606) 2022-05-17 18:35:33 +02:00
weeix
31fea55ee4 Fix PLAYWRIGHT_DRIVER_URL default value (cf. #587) (#599) 2022-05-14 22:34:44 +02:00
dgtlmoon
b6c50d3b1a Update PIP readme.md 2022-05-10 22:46:59 +02:00
dgtlmoon
034507f14f Fixing Pip install problem - Update MANIFEST to include model/ subdir, improving imports (#593) 2022-05-10 22:45:08 +02:00
dgtlmoon
0e385b1c22 0.39.13 2022-05-10 17:24:38 +02:00
dgtlmoon
f28c260576 Distill.io JSON export file importer (#592) 2022-05-10 17:15:41 +02:00
dgtlmoon
18f0b63b7d Ability to specify a list of proxies to choose from, always using the first one by default, See wiki (#591) 2022-05-08 20:35:36 +02:00
Thilo-Alexander Ginkel
97045e7a7b Improving Playwright docs (#588) 2022-05-07 22:23:17 +02:00
dgtlmoon
9807cf0cda Playwright - code fix 2022-05-07 17:29:59 +02:00
dgtlmoon
d4b5237103 Playwright fetcher - more reliable by just waiting arbitrary seconds after the last network IO 2022-05-07 17:14:40 +02:00
dgtlmoon
dc6f76ba64 Make proxy configuration more consistent - see https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration (#585) 2022-05-07 16:37:56 +02:00
dgtlmoon
1f2f93184e Playwright fetcher - use the correct default User-Agent 2022-05-06 23:59:38 +02:00
dgtlmoon
0f08c8dda3 Toggle visibility of extra requests options/settings when not in use (#584) 2022-05-06 23:40:32 +02:00
dgtlmoon
68db20168e Add new fetch method: Playwright Chromium (Selenium/WebDriver alternative) (#489) 2022-05-02 21:40:40 +02:00
dgtlmoon
1d4474f5a3 Simplify scrub operation (simply cleans all) (#575) 2022-05-02 21:10:23 +02:00
dgtlmoon
613308881c Bugfix - don't update record when deleted during check 2022-05-01 21:41:29 +02:00
dgtlmoon
f69585b276 Improving support info in README.md 2022-04-29 20:26:02 +02:00
dgtlmoon
0179940df1 Handle deletions better (#570) 2022-04-29 19:12:33 +02:00
dgtlmoon
c0d0424e7e Data storage bug fix #569 2022-04-29 18:26:15 +02:00
dgtlmoon
014dc61222 Upgrade notifications library - fixing markup in email subject 2022-04-29 09:39:40 +02:00
dgtlmoon
06517bfd22 Ability to 'Share' a watch by a generated link, this will include all filters and triggers - see Wiki (#563) 2022-04-26 10:52:08 +02:00
dgtlmoon
b3a115dd4a Upgrade notifications library Re #555 - fixing telegram HTML markup in notification title 2022-04-25 23:12:32 +02:00
dgtlmoon
ffc4215411 Unify MINIMUM_SECONDS_RECHECK_TIME env var variable to 60 seconds 2022-04-24 20:37:30 +02:00
dgtlmoon
9e708810d1 Seconds/minutes/hours/days between checks form field upgrade from 'minutes' only (#512) 2022-04-24 16:56:32 +02:00
dgtlmoon
1e8aa6158b Form styling improvements 2022-04-24 14:40:53 +02:00
dgtlmoon
015353eccc Form field handling improvements - fixing field list handler for empty lines 2022-04-24 13:53:13 +02:00
dgtlmoon
501183e66b Fix "Add email" button in main global notification settings 2022-04-22 10:51:52 +02:00
dgtlmoon
def74f27e6 Test notification button fixed in main settings (#556) 2022-04-21 21:36:10 +02:00
dgtlmoon
37775a46c6 tgram:// be sure total notification size is always under their 4096 size limit 2022-04-21 16:28:15 +02:00
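The size cap mentioned here, sketched as a simple truncation step; the 4096 figure comes from the commit message, the helper name is hypothetical:

    TELEGRAM_MAX_CHARS = 4096  # limit referenced in the commit message

    def fit_for_telegram(title, body, limit=TELEGRAM_MAX_CHARS):
        # Keep the combined title + body under the service limit.
        return f"{title}\n{body}"[:limit]

    print(len(fit_for_telegram("Change detected", "x" * 10000)))  # -> 4096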
dgtlmoon
e4eaa0c817 Shows which items are already in the queue, disables adding to the queue if already in the recheck queue (#552) 2022-04-21 12:52:45 +02:00
dgtlmoon
206ded4201 Notifications - Signal API support, Ntfy support, hotmail, matrix, Gotify API fixes 2022-04-20 23:13:55 +02:00
dgtlmoon
9e71f2aa35 Discord:// notification size limit - also includes the notification title 2022-04-20 17:00:21 +02:00
dgtlmoon
f9594aeffb Fix spelling errors 2022-04-20 09:51:53 +02:00
dgtlmoon
b4e1353376 Update README.md 2022-04-19 23:56:11 +02:00
dgtlmoon
5b670c38d3 Update README.md 2022-04-19 23:48:21 +02:00
dgtlmoon
2a9fb12451 Import speed improvements, and adding an import URL batch size of 5,000 to stop accidental CPU overload (#549) 2022-04-19 23:15:32 +02:00
dgtlmoon
6c3c5dc28a Ability to set the default fetch mode via the DEFAULT_FETCH_BACKEND variable 2022-04-19 23:15:00 +02:00
dgtlmoon
8f062bfec9 Refactor form handling (#548) 2022-04-19 21:43:07 +02:00
dgtlmoon
380c512cc2 Adding support for change detection of HTML source-code via "source:https://website.com" prefix (#540) 2022-04-12 17:36:29 +02:00
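A small sketch of how such a URL prefix can be recognised before fetching; the flag name is hypothetical and only illustrates switching to raw-HTML comparison:

    watch_url = "source:https://website.com"

    monitor_source_code = watch_url.startswith("source:")
    if monitor_source_code:
        # Strip the prefix and compare the raw HTML instead of the rendered text.
        watch_url = watch_url[len("source:"):]

    print(monitor_source_code, watch_url)  # -> True https://website.com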
dgtlmoon
d7ed7c44ed Re-label the quick-add widget placeholder 'tag' to 'watch group' 2022-04-12 10:55:43 +02:00
dgtlmoon
34a87c0f41 HTTP Fetcher code improvements 2022-04-12 08:36:08 +02:00
dgtlmoon
4074fe53f1 Adding RSS metadata auto-discovery 2022-04-12 07:35:47 +02:00
Tristan Hill
44d599d0d1 Upgrade WTforms form handler to v3 (#523) 2022-04-09 19:50:56 +02:00
dgtlmoon
615fe9290a 0.39.12 2022-04-09 14:16:30 +02:00
dgtlmoon
2cc6955bc3 Miscellaneous settings form visual improvements (#535) 2022-04-09 12:15:34 +02:00
dgtlmoon
9809af142d Option to render links as [Some Text ](/link), adds the ability to change-detect on hyperlink changes 2022-04-09 10:35:14 +02:00
dgtlmoon
1890881977 Specify our Discord avatar_url as default avatar_url 2022-04-08 18:35:59 +02:00
dgtlmoon
9fc2fe85d5 Minor git updates 2022-04-08 17:21:42 +02:00
dgtlmoon
bb3c546838 Fix screenshot tab name 2022-04-08 17:08:06 +02:00
dgtlmoon
165f794595 Discord:// notifications should be cut to 2000 chars or Discord will not process them. (#531 + #323) 2022-04-08 16:32:04 +02:00
dgtlmoon
a440eece9e Make long reports in the notification error log easier to read 2022-04-08 14:12:42 +02:00
dgtlmoon
34c83f0e7c [Add email] button in notification settings with a prefix set from NOTIFICATION_MAIL_BUTTON_PREFIX env variable when defined. (#528) 2022-04-07 18:18:23 +02:00
dgtlmoon
f6e518497a Update README.md 2022-04-07 14:59:31 +02:00
dgtlmoon
63e91a3d66 Skip processing a watch into the RSS feed if there's not enough data to examine (fixes Internal Server Error when accessing the RSS feed) (#521) 2022-04-05 20:31:31 +02:00
dgtlmoon
3034d047c2 Introduce an AJAX button for sending test notifications instead of the checkbox (#519) 2022-04-05 18:04:26 +02:00
dgtlmoon
2620818ba7 Make text tab always available by default 2022-04-02 14:55:40 +02:00
dgtlmoon
9fe4f95990 When fetching a snapshot via Chrome, make the most recent screenshot available on the Diff and Preview pages (#516) 2022-04-02 14:49:32 +02:00
dgtlmoon
ffd2a89d60 Remove 'unviewed' status in watch table when Diff link clicked (#514) 2022-03-31 11:01:07 +02:00
dgtlmoon
8f40f19328 RSS feed CDATA should contain difference output 2022-03-30 10:51:10 +02:00
dgtlmoon
082634f851 Fix - {diff} and {diff_full} notifications tokens were not always including the full output 2022-03-29 19:18:26 +02:00
dgtlmoon
334010025f Update README.md 2022-03-26 14:02:56 +01:00
dgtlmoon
81aa8fa16b Update README.md 2022-03-26 09:56:56 +01:00
dgtlmoon
c79d6824e3 Minor UI cleanups (mobile tabs, font sizing) (#503) 2022-03-25 23:37:28 +01:00
zznidar
946377d2be Fix typo in Filters & Triggers settings. (#495) 2022-03-23 23:18:04 +01:00
zznidar
5db9a30ad4 Add autofocus attribute to password login field (#496) 2022-03-23 23:17:47 +01:00
dgtlmoon
1d060225e1 0.39.11 2022-03-23 09:42:51 +01:00
dgtlmoon
7e0f0d0fd8 Microsoft Windows installation fixes (#492) 2022-03-22 23:08:08 +01:00
dgtlmoon
8b2afa2220 GitHub tweak - container tags should be CSV list (Fix ghcr.io not building) 2022-03-22 00:08:05 +01:00
dgtlmoon
f55ffa0f62 GitHub tweak - build containers also on push to master 2022-03-21 23:08:17 +01:00
dgtlmoon
942c3f021f Allow changedetector to ignore status codes as a per-site setting (#479) (#485)
Co-authored-by: Ara Hayrabedian <ara.hayrabedian@gmail.com>
2022-03-21 23:03:54 +01:00
dgtlmoon
5483f5d694 Security update - Use CSRF token protection for forms, make "remove password" use HTTP Post (#484) 2022-03-21 22:54:27 +01:00
dgtlmoon
f2fa638480 Security update - Protect against file:/// type access by webdriver/chrome. (#483) 2022-03-21 20:59:20 +01:00
dgtlmoon
82d1a7f73e Only build container on GitHub releases, not tests 2022-03-20 16:57:36 +01:00
dgtlmoon
9fc291fb63 Also change container names to help stop some DNS issues 2022-03-17 19:59:37 +01:00
dgtlmoon
3e8a15456a Detect byte-encoding when the server mishandles the content-type header reply (#472) 2022-03-17 10:28:02 +01:00
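A hedged sketch of this kind of fallback, assuming the requests library (its apparent_encoding property guesses the charset from the response body when the Content-Type header is wrong or missing); not necessarily how the project implements it:

    import requests

    r = requests.get("https://example.com")
    if "charset" not in r.headers.get("Content-Type", "").lower():
        # The header gave no usable charset - fall back to detection from the raw bytes.
        r.encoding = r.apparent_encoding
    text = r.text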
dgtlmoon
2a03f3f57e Improving form/edit example markup 2022-03-13 12:00:45 +01:00
dgtlmoon
ffad5cca97 JSON diff/preview should use utf-8 encoding where possible (#465) 2022-03-13 11:37:51 +01:00
Tim Loderhose
60a9a786e0 Fix typo in settings form 2022-03-13 10:55:37 +01:00
dgtlmoon
165e950e55 Add python venv to .gitignore 2022-03-13 10:53:33 +01:00
dgtlmoon
c25294ca57 0.39.10 2022-03-12 17:28:30 +01:00
Tim Loderhose
d4359c2e67 Add filter to remove elements by CSS rule from HTML before change detection is run (#445) 2022-03-12 13:29:30 +01:00
dgtlmoon
44fc804991 Minor updates to filters form text 2022-03-12 11:20:43 +01:00
dgtlmoon
b72c9eaf62 Re #448 - Don't use changedetection.io as the container name and hostname, fix problems fetching from the real changedetection.io webserver :) 2022-03-12 08:24:51 +02:00
dgtlmoon
7ce9e4dfc2 Testing - Refactor HTTP Request Type test (#453) 2022-03-11 18:50:02 +01:00
dgtlmoon
3cc6586695 Make table header font size the same as content 2022-03-07 13:03:59 +01:00
dgtlmoon
09204cb43f Adjust background colours 2022-03-06 19:03:59 +01:00
dgtlmoon
a709122874 Handle the case where the visitor is already logged-in and tries to login again (#447) 2022-03-06 18:19:05 +01:00
dgtlmoon
efbeaf9535 Make the Request Override settings easier to understand 2022-03-06 17:23:21 +01:00
dgtlmoon
1a19fba07d Minor tweak to notification token table 2022-03-06 17:10:30 +01:00
dgtlmoon
eb9020c175 Style tweak to watch form 2022-03-06 17:05:23 +01:00
dgtlmoon
13bb44e4f8 Login form style fixes 2022-03-06 17:03:15 +01:00
dgtlmoon
47f294c23b Upgrade apprise notification engine to 0.9.7 (important telegram fixes) 2022-03-05 13:14:14 +01:00
dgtlmoon
a4cce16188 Remove pytest from production release pip requirements 2022-03-05 13:12:15 +01:00
dgtlmoon
69aec23d1d Style fix for background image relative to X-Forwarded-Prefix when running via reverse proxy subdirectory 2022-03-05 13:08:57 +01:00
dgtlmoon
f85ccffe0a Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-03-04 13:13:54 +01:00
dgtlmoon
0005131472 Re-arranging primary links so the important ones are easier to find on mobile 2022-03-04 13:06:39 +01:00
dgtlmoon
3be1f4ea44 Set authentication cookie path relative to X-Forwarded-Prefix when running via reverse proxy subdirectory (#446) 2022-03-04 11:23:32 +01:00
dgtlmoon
46c72a7fb3 Upgrade inscriptis HTML converter to version 2.2~ (#434) 2022-03-01 17:58:54 +01:00
dgtlmoon
96664ffb10 Better text/plain detection and refactor tests (#443) 2022-03-01 17:50:15 +01:00
dgtlmoon
615fa2c5b2 Tweak support tabs and text (#440) 2022-02-28 22:39:32 +01:00
dgtlmoon
fd45fcce2f Include link to changedetection.io hosted option (#439) 2022-02-28 15:47:59 +01:00
dgtlmoon
75ca7ec504 Improved CPU usage around the loop responsible for deciding which sites need to be checked 2022-02-28 15:08:51 +01:00
dgtlmoon
8b1e9f6591 Update README.md with hosting options 2022-02-26 18:42:54 +01:00
dgtlmoon
883aa968fd 0.39.9 2022-02-24 17:02:50 +01:00
dgtlmoon
3240ed2339 Minor reliability upgrade for large datasets - retry deepcopy (#436) 2022-02-24 16:58:51 +01:00
dgtlmoon
a89ffffc76 "Recheck" button should work when entry is in paused state 2022-02-24 16:49:48 +01:00
dgtlmoon
fda93c3798 Better file exception handling on saving index JSON 2022-02-24 16:36:24 +01:00
dgtlmoon
a51c555964 Fix small issue in highlight trigger/ignore preview page with setting the background colours, add test 2022-02-23 12:30:36 +01:00
dgtlmoon
b401998030 Ensure string matching on the ignore filter is always case-INsensitive 2022-02-23 12:01:11 +01:00
dgtlmoon
014fda9058 Ability to visualise trigger and filter rules against the current snapshot on the preview page 2022-02-23 10:49:25 +01:00
dgtlmoon
dd384619e0 Update README.md 2022-02-19 13:41:54 +01:00
Michael
85715120e2 XPath RegularExpression support 2022-02-19 13:40:57 +01:00
dgtlmoon
a0e4f9b88a better checking of JSON type 2022-02-17 18:16:47 +01:00
dgtlmoon
04bef6091e Make system level errors from the HTTP fetchers easier to find (#421) 2022-02-13 23:43:45 +01:00
dependabot[bot]
536948c8c6 Bump node-sass from 6.0.1 to 7.0.0 in /changedetectionio/static/styles (#415)
Bumps [node-sass](https://github.com/sass/node-sass) from 6.0.1 to 7.0.0.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-11 09:10:55 +01:00
dgtlmoon
d4f4ab306a Dont allow redirect on login, it's safer and more reliable this way (#414) 2022-02-08 21:12:44 +01:00
dgtlmoon
8d2e240a2a When using the env vars FETCH_WORKERS or WEBDRIVER_DELAY_BEFORE_CONTENT_READY, the value should be of type int 2022-02-08 20:01:24 +01:00
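Sketch of reading these environment variables as integers; the fallback values are illustrative, not the project's real defaults:

    import os

    # Cast explicitly - environment variables always arrive as strings.
    fetch_workers = int(os.getenv("FETCH_WORKERS", "10"))
    webdriver_delay = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", "0"))

    print(fetch_workers, webdriver_delay)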
dgtlmoon
d7ed479ca2 0.39.8 2022-02-08 18:56:10 +01:00
dgtlmoon
f25cdf0a67 Number of fetching workers can be overridden by the env var "FETCH_WORKERS" (#413) 2022-02-08 18:27:56 +01:00
dgtlmoon
5214a7e0f3 Adding Env var "WEBDRIVER_DELAY_BEFORE_CONTENT_READY" to wait n seconds before extracting the text from the browser 2022-02-08 18:24:25 +01:00
dgtlmoon
eb3dca3805 Language fix: "watches are rechecking" - it actually puts them into an internal queue, so "watches are QUEUED for rechecking" 2022-02-08 13:00:18 +01:00
dgtlmoon
a580c238b6 Use flask url_for() for webdriver chrome icon instead of relative path 2022-02-05 23:25:57 +01:00
Alexander Aleksandrovič Klimov
7ca89f5ec3 Fix typo in the startup create-directory command suggestion (#405) 2022-02-05 19:46:02 +01:00
Alexander Aleksandrovič Klimov
8ab8aaa6ae Introduce -h option to allow listening not on 0.0.0.0. (#406) 2022-02-05 19:29:22 +01:00
dgtlmoon
22ef9afb93 Refactor tests for notification error log handler (#404) 2022-02-04 20:54:20 +01:00
dgtlmoon
abaec224f6 Notification error log handler (#403)
* Add a notifications debug/error log interface (Link available under the notification URLs list)
2022-02-04 19:29:39 +01:00
dgtlmoon
5a645fb74d Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-02-04 17:31:54 +01:00
dgtlmoon
14db60e518 Add notification note - tgram:// bots can't send messages to other bots, so you should specify the chat ID of a non-bot user. 2022-02-04 17:31:32 +01:00
Radu Ursache
e250c552d0 fixed the reference to wiki for rpi section (#402) 2022-02-04 10:55:30 +01:00
dgtlmoon
8e54a17e14 /preview format doesn't need <pre> - fixing too many newlines in content on the diff/preview page 2022-02-02 14:39:42 +01:00
dgtlmoon
8607eccaad Update README.md 2022-02-02 11:33:22 +01:00
dgtlmoon
17511d0d7d Update README - Fix docker section 2022-01-30 15:20:26 +01:00
dgtlmoon
41b806228c Update README - Tidy up sections 2022-01-30 15:19:21 +01:00
dgtlmoon
453cf81e1d Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-30 02:15:15 +01:00
dgtlmoon
0095b28ea3 Offer instance on Lemonade
Tidy README
2022-01-30 02:14:32 +01:00
dgtlmoon
73101a47e7 Ability to use a generated salted password in deployments as env var SALTED_PASS (#397)
* Ability to use a generated salted password in deployments as env var SALTED_PASS
2022-01-29 19:36:44 +01:00
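A hedged sketch of producing a salted password hash with hashlib.pbkdf2_hmac (the primitive named in the password-login commit further down this log); the exact string format changedetection.io expects in SALTED_PASS is not shown here, and the encoding below is illustrative only:

    import hashlib
    import os

    password = "my-secret-password"
    salt = os.urandom(32)
    key = hashlib.pbkdf2_hmac("sha256", password.encode("utf-8"), salt, 100_000)

    # Illustrative encoding only - check the project docs for the real SALTED_PASS format.
    print(salt.hex() + key.hex())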
dgtlmoon
03f776ca45 #323 Adding note about discord:// 2000 char limit (#392)
* Adding note about discord:// 2000 char limit
2022-01-28 10:38:04 +01:00
dgtlmoon
39b7be9e7a plaintext mime type fix - Don't attempt to extract HTML content from plaintext, this will remove lines and break changedetection (#391) 2022-01-27 23:16:50 +01:00
dgtlmoon
6611823962 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-27 23:01:17 +01:00
dgtlmoon
c1c453e4fe .add_watch() can accept an empty tag
Use https://changedetection.io/CHANGELOG.txt as a nice default page to watch
2022-01-27 23:00:39 +01:00
Tim Loderhose
4887180671 Add option for tags on import (#377)
* Add option for tags on import and backup
2022-01-25 18:46:05 +01:00
dgtlmoon
ac7378b7fb Update CONTRIBUTING.md 2022-01-24 22:09:14 +01:00
dgtlmoon
eeba8c864d Update README.md 2022-01-22 15:35:07 +01:00
Travis Howse
abe88192f4 Fix bug where diff and diff_full were switched in notification templates. (#380) 2022-01-21 12:26:08 +01:00
dgtlmoon
af8efbb6d2 Closes #378 2022-01-19 23:16:49 +01:00
dgtlmoon
bbc2875ef3 0.39.7 2022-01-15 23:21:06 +01:00
dgtlmoon
b7ca10ebac Scrub watch snapshot fixes 2022-01-15 23:18:04 +01:00
dgtlmoon
a896493797 Simple HTTP auth (#372)
HTTP Basic Auth form validation
2022-01-15 22:52:39 +01:00
dgtlmoon
e5fe095f16 Adding note about JS pages 2022-01-12 18:18:40 +01:00
dgtlmoon
271181968f Notification settings defaults and validation (#361)
* Re #360 - Validate that when a notification URL is set, we have also a notification body and title, new install should have notification title/body defaults set.
2022-01-10 17:38:04 +01:00
dgtlmoon
8206383ee5 Filters settings helper text tidy-up 2022-01-09 14:36:07 +01:00
dgtlmoon
ecfc02ba23 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-09 11:45:23 +01:00
dgtlmoon
3331ccd061 Add test for low-level network error text handling 2022-01-09 11:45:04 +01:00
Unpublished
bd8f389a65 Add API endpoint for current snapshot (#359) 2022-01-08 16:38:42 +01:00
dgtlmoon
bc74227635 Clarify notice/messages around changing ignore text 2022-01-05 20:42:45 +01:00
dgtlmoon
07c60a6acc Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-05 19:13:42 +01:00
dgtlmoon
7916faf58b 0.39.6 2022-01-05 19:13:36 +01:00
dgtlmoon
febb2bbf0d Heroku tweaks (backup download) (#356)
* use an absolute path, just in case the data-dir is set to a relative one
2022-01-05 19:12:13 +01:00
dgtlmoon
59d31bf76f XPath support (#355)
* XPath support and minor improvements to form validation
2022-01-05 17:58:07 +01:00
dgtlmoon
f87f7077a6 Better handling of EmptyReply exception, always bump 'last_checked' in the case of an error (#354)
* Better handling of EmptyReply exception, always bump 'last_checked' in the case of an error, adds test
2022-01-05 14:13:30 +01:00
revilo951
f166ab1e30 Adding note in comments for working arm64 chrome with rPi-4 (#336) 2022-01-05 12:20:56 +01:00
Valtteri Huuskonen
55e679e973 fix typo in README.md (#350)
Fix spelling of Raspberry Pi.
2022-01-04 10:55:20 +01:00
dgtlmoon
e211ba806f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-03 20:16:51 +01:00
dgtlmoon
b33105d576 Re #348 - Add test for backup, use proper datastore path 2022-01-03 20:16:21 +01:00
dgtlmoon
b73f5a5c88 Update README.md 2022-01-03 18:46:50 +01:00
Unpublished
023951a10e Be sure that documents returned with an application/json header are not parsed with inscriptis (#337)
* Auto-detect JSON by Content-Type header
* Add test to not parse JSON responses with inscriptis
2022-01-02 22:35:33 +01:00
dgtlmoon
fbd9ecab62 Re #340 - snapshot should not be modified by ignore text (#344) 2022-01-02 22:35:04 +01:00
dgtlmoon
b5c1fce136 Re #133 Option for ignoring whitespace (#345)
* Global setting option to ignore whitespace when detecting a change
2022-01-02 22:28:34 +01:00
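The whitespace-ignore option sketched as a normalisation step before comparing snapshots; a conceptual illustration, not the project's exact implementation:

    import hashlib
    import re

    def checksum_ignoring_whitespace(text):
        # Collapse runs of whitespace so purely cosmetic changes don't register as a change.
        normalised = re.sub(r"\s+", " ", text).strip()
        return hashlib.md5(normalised.encode("utf-8")).hexdigest()

    print(checksum_ignoring_whitespace("Price:  19.99\n") == checksum_ignoring_whitespace("Price: 19.99"))  # -> True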
dgtlmoon
489671dcca Re #342 notification encoding (#343)
* Re #342 - check for accidental python byte encoding of non-utf8/string, check return type of fetcher and fix encoding of notification content
2022-01-02 14:11:04 +01:00
dgtlmoon
d4dc3466dc Update README.md 2022-01-01 18:11:54 +01:00
dgtlmoon
0439acacbe Adding global ignore text (#339) 2022-01-01 14:53:08 +01:00
dgtlmoon
735fc2ac8e Adding new proxyType to selenium mappings 2021-12-31 10:48:11 +01:00
dgtlmoon
8a825f0055 Use selenium 4.1.0 2021-12-31 10:44:45 +01:00
dgtlmoon
d0ae8b7923 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-31 10:35:47 +01:00
dgtlmoon
a504773941 Bumping selenium version re https://github.com/dgtlmoon/changedetection.io/pull/331#issuecomment-1003323594 2021-12-31 10:35:29 +01:00
Calvin Bui
feb8e6c76c Add socksVersion mapping (#331) 2021-12-31 10:26:38 +01:00
dgtlmoon
a37a5038d8 Fix broken RSS link fields 2021-12-30 00:04:38 +01:00
dgtlmoon
f1933b786c RSS Link links you back to the difference UI/JS page, RSS Description is the page you're watching, and RSS Title is the page you're watching 2021-12-29 23:57:30 +01:00
dgtlmoon
d6a6ef2c1d Unify Filters and Triggers tabs into a single tab 2021-12-29 23:37:04 +01:00
dgtlmoon
cf9554b169 Move 'request type' field to the new 'Requests' tab 2021-12-29 23:31:53 +01:00
dgtlmoon
d602cf4646 Aligning call signatures #325 2021-12-29 23:28:34 +01:00
Simon Caron
dfcae4ee64 Extend Request Parameters to add Body & Method (#325) 2021-12-29 23:18:29 +01:00
dgtlmoon
e3bcd8c9bf Update README.md 2021-12-29 08:55:37 +01:00
dgtlmoon
c4990fa3f9 Create CONTRIBUTING.md 2021-12-28 18:59:43 +01:00
dgtlmoon
98461d813e Update README.md 2021-12-28 18:57:39 +01:00
dgtlmoon
8ec17a4c83 Re #267 - Pass settings for the proxy setup for webdriver (#326)
* Re #267 - Pass HTTP_PROXY as the proxy setup for webdriver
* Update README.md
2021-12-28 17:07:41 +01:00
dgtlmoon
ee708cc395 Update README.md 2021-12-28 13:19:24 +01:00
dgtlmoon
8a670c029a Update README.md 2021-12-28 13:18:44 +01:00
dgtlmoon
9fa5aec01e Update README.md 2021-12-28 00:47:00 +01:00
dgtlmoon
43c9cb8b0c 0.39.5 2021-12-27 23:46:29 +01:00
dgtlmoon
b6a359d55b Update feature_request.md 2021-12-27 13:50:38 +01:00
dgtlmoon
ae5a88beea Update issue templates 2021-12-27 13:49:07 +01:00
dgtlmoon
a899d338e9 Update bug_report.md 2021-12-27 13:48:02 +01:00
dgtlmoon
7975e8ec2e Update issue templates 2021-12-27 13:46:41 +01:00
dgtlmoon
ce383bcd04 W3C HTML validation issue around RSS icon 2021-12-27 10:55:43 +01:00
dgtlmoon
0b0cdb101b Closes #323 adds link to wiki 2021-12-27 10:14:40 +01:00
dgtlmoon
396509bae8 Update README.md 2021-12-22 10:43:22 +01:00
dgtlmoon
2973f40035 Update README.md 2021-12-22 10:42:48 +01:00
dgtlmoon
067fac862c Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-19 23:17:48 +01:00
dgtlmoon
20647ea319 improve theming docs 2021-12-19 23:17:24 +01:00
dgtlmoon
fafc7fda62 Update README.md 2021-12-19 23:10:55 +01:00
dgtlmoon
b1aaf9f277 Update README.md 2021-12-19 23:04:56 +01:00
dgtlmoon
18987aeb23 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-19 18:17:37 +01:00
dgtlmoon
856789a9ba Closes #315 - Include library apprise Notify_mqtt 2021-12-19 18:16:51 +01:00
Iván
2857c7bb77 Re #80, sets SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites (#312)
* set SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites

* Re #80, sets SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites
2021-12-16 12:13:47 +01:00
dgtlmoon
df951637c4 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-16 11:53:39 +01:00
dgtlmoon
ba6fe076bb Go back to docker hub 2021-12-16 11:53:28 +01:00
dgtlmoon
9815fc2526 RSS allow access via token (#310)
Allow access via a token
* New RSS URL
* Redirect the old RSS feed URL
* fix tests
2021-12-16 00:05:01 +01:00
dgtlmoon
e71dbbe771 Adding deploy to Heroku button 2021-12-15 23:32:48 +01:00
dgtlmoon
bd222c99c6 Adding heroku app.json app 2021-12-15 23:28:23 +01:00
dgtlmoon
4b002ad9e0 Tweak runtime Heroku version 2021-12-15 23:20:21 +01:00
dgtlmoon
fe2ffd6356 Tweaking heroku Procfile 2021-12-15 23:20:06 +01:00
dgtlmoon
266bebb5bc Adjust buildpacks on Heroku 2021-12-15 23:15:36 +01:00
dgtlmoon
115ff5bc2e Adding heroku python3 runtime config 2021-12-15 23:13:03 +01:00
dgtlmoon
dd6a24d337 Try simpler heroku recipe 2021-12-15 23:09:43 +01:00
dgtlmoon
f0d418d58c Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-15 23:07:32 +01:00
dgtlmoon
10d3b09051 -C option to create a datadir if it doesn't exist 2021-12-15 23:07:13 +01:00
dgtlmoon
35d0c74454 Re #308 - Adding test and including settings in clone operation (#309) 2021-12-15 19:54:30 +01:00
Glassed Silver
dd450b81ad fixing too small font in diff UI (#260)
* fixing too small font in diff UI, lower size from 12 to 11 in Part II
2021-12-15 19:21:25 +01:00
dgtlmoon
512d76c52b Update README.md
Make link more accurate
2021-12-10 20:21:27 +01:00
dgtlmoon
5a10acfd09 Send diff in notifications (#296) 2021-12-10 12:08:51 +01:00
dgtlmoon
a7c09c8990 Fix scrub form theme 2021-12-10 00:09:54 +01:00
dgtlmoon
9235eae608 Scrub dates: Fix date regex limit handler parsing 2021-12-10 00:09:42 +01:00
dgtlmoon
5bbd82be79 Wait 60 seconds or until stop_thread is set 2021-12-09 23:28:17 +01:00
dgtlmoon
7f8c0fb2fa Check that a notification URL is set when sending the test notification (#300) 2021-12-08 12:23:48 +01:00
Tristan Hill
489eedf34e Flask 2 (#299)
Co-authored-by: Tristan Hill <t+git@eaux.uk>
2021-12-07 23:23:23 +01:00
dgtlmoon
3956b3fd68 Re #269 - Show current/correct BASE_URL information (#271)
* Re #269 - Show current/correct BASE_URL information
2021-12-04 15:23:23 +01:00
dgtlmoon
61c1d213d0 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-04 14:48:18 +01:00
dgtlmoon
e07f573f64 Re #269 - Fix env var comment name 2021-12-04 14:47:46 +01:00
ghjklw
ecba130fdb Enable Markdown and HTML notifications. (#288)
This change enables defining the notification body as HTML or Markdown. This can be very
useful to have more user-friendly notifications such as:
* applying a heading style to the `{watch_title}` to make it stand out
* creating clickable links using the `{watch_url}`, `{preview_url}` and `{diff_url}`.

Changes
=======
* Add a `notification_format` to the notification settings, defaults to plain text.
* Use the `body_format` parameter of Apprise's `notify` method.

Co-authored-by: Malo Jaffré <malo.jaffre@dunnhumby.com>
2021-12-04 14:41:48 +01:00
dgtlmoon
ff6dc842c0 0.39.4 release 2021-12-02 22:54:38 +01:00
dgtlmoon
4659993ecf Re #286 - Solving lost data/corrupted data - Tweak timing and try to write to a temp file first (#292)
* Re #286 - Tweak timing and try to write to a temp file first, Increase logging and format info message better.
2021-12-02 22:48:44 +01:00
jeremysherriff
0a29b3a582 Fix element paths when using reverse proxy subfolder (#272) 2021-11-12 11:34:19 +01:00
dgtlmoon
c55bf418c5 0.39.3 release 2021-10-28 11:32:33 +02:00
dgtlmoon
4bbb7d99b6 Re #264 - fixing clone watch operation 2021-10-28 11:29:59 +02:00
dgtlmoon
a8e92e2226 Re #265 - extended jsonpath support (#266)
* Re #265 - Use extended JSONpath support,
Allow a JSONPath selector to not match anything (yet)
Adding test
Correctly capture invalid JSONPath query error
2021-10-27 09:24:08 +02:00
dgtlmoon
c17327633f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-10-26 22:32:29 +02:00
dgtlmoon
56d1dde7c3 Re #265 - wasn't catching the jsonpath exception due to invalid jsonpath expressions properly 2021-10-26 22:30:58 +02:00
dgtlmoon
6e4ddacaf8 Re #257 - Handle bool val of json path better (#263)
* Re #257 - Handle bool val of json path better, with test
2021-10-21 23:25:38 +02:00
dgtlmoon
3195ffa1c6 Re #249 - Add EXPOSE 5000 to Dockerfile 2021-10-06 22:28:35 +02:00
dgtlmoon
c749d2ee44 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-10-06 20:51:38 +02:00
dgtlmoon
ec94359f3c Provide better combination of chardet and urllib3 2021-10-06 20:51:05 +02:00
dgtlmoon
4d0bd58eb1 Prefer GHCR.io over DockerHub (#245)
* Prefer GHCR.io over DockerHub (DockerHub limits pulls)
2021-10-06 13:07:56 +02:00
dgtlmoon
3525f43469 Limit branches/tags of container build
Limit branch
2021-10-06 12:27:02 +02:00
dgtlmoon
d70252c1eb Re #213 - Adding screensize examples to selenium container 2021-10-06 11:34:24 +02:00
dgtlmoon
b57b94c63a Be more specific about tagged release builds 2021-10-06 11:28:39 +02:00
dgtlmoon
9e914c140e Fix :latest release workflow syntax check 2021-10-06 10:27:03 +02:00
dgtlmoon
5d5ceb2f52 Form helper - explain where the webdriver setting comes from 2021-10-06 09:27:41 +02:00
dgtlmoon
bc0303c5da Rename workflow name 2021-10-06 08:59:03 +02:00
dgtlmoon
1240da4a6e Just 'published' and 'edited' package release is enough (remove 'created') 2021-10-06 08:52:10 +02:00
dgtlmoon
4267bda853 Fixing workflow tag syntax issues 2021-10-06 08:49:33 +02:00
dgtlmoon
db1ff1843c fix broken workflow syntax 2021-10-06 08:45:05 +02:00
dgtlmoon
fe3c20b618 add step for metadata debug, see if it runs by checking workflow tag name 2021-10-06 08:42:40 +02:00
dgtlmoon
2fa93cba3a Container build/push doesn't need to be so specific 2021-10-05 22:09:12 +02:00
dgtlmoon
254fbd5a47 Oops - on/release was in the wrong block 2021-10-05 19:13:45 +02:00
dgtlmoon
18f2318572 release also on edited, published 2021-10-05 19:05:09 +02:00
dgtlmoon
84417fc2b1 Run workflow on release 2021-10-05 19:02:05 +02:00
dgtlmoon
7f7fc737b3 Use a better switch mechanism for build type 2021-10-05 18:48:54 +02:00
dgtlmoon
2dc43bdfd3 version 0.39.2 2021-10-05 18:21:40 +02:00
dgtlmoon
95e39aa727 Configurable BASE_URL (#228)
Re #152 ability to over-ride env var BASE_URL, with UI+tests
2021-10-05 18:15:36 +02:00
dgtlmoon
2c71f577e0 Split python pip builder to its own release based workflow 2021-10-05 17:01:34 +02:00
dgtlmoon
f987d32c72 remove accidental syntax add 2021-10-05 17:01:26 +02:00
dgtlmoon
cd7df86f54 Re #242 - app was treating notification field defaults as the field value (#244) 2021-10-05 14:33:57 +02:00
dgtlmoon
cb8fa2583a attempt to re-enable docker layer cache 2021-10-05 11:48:09 +02:00
dgtlmoon
3d3e5db81c Forgot GHCR tag with version 2021-10-05 11:43:56 +02:00
dgtlmoon
c9860dc55e Limit container build to releases and master 2021-10-05 11:13:23 +02:00
dgtlmoon
dbd5cf117a Fix GHCR login 2021-10-05 10:47:50 +02:00
dgtlmoon
e805d6ebe3 Use the same workflow for tag and release 2021-10-05 10:40:28 +02:00
dgtlmoon
01f469d91d Drop redundant build workflow 2021-10-05 10:33:15 +02:00
dgtlmoon
e91cab0c6d try :latest and :tag in same workflow run 2021-10-05 10:28:27 +02:00
dgtlmoon
106c3269a6 Separate workflows 2021-10-05 10:16:23 +02:00
dgtlmoon
1628602860 Docker image build issues (#243)
Pin cryptography ~= 3.4, fixes build issues for multiplatform docker buildx, and a little tidy up of github workflows.
2021-10-05 10:09:10 +02:00
dgtlmoon
ca0ab50c5e Re #239 - Individual GUID for watch+changeevent (#241)
* Re #239 - Individual GUID for watch+changeevent
2021-10-04 08:34:10 +02:00
dgtlmoon
df0b7bb0fe Update README.md
Re #240 return update instructions
2021-10-03 19:25:50 +02:00
dgtlmoon
fe59ac4986 Re #232 - Use a copy of the datastore in case it changes while we iterate through it (#234) 2021-09-23 18:27:16 +02:00
dgtlmoon
25476bfcb2 Setting for Extract <title> as title option on individual watches (#229)
* Extract <title> as title option on individual items
2021-09-19 22:57:15 +02:00
dgtlmoon
6901fc493d Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-09-18 10:33:09 +02:00
dgtlmoon
c40417ff96 GitHub repo build platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7 2021-09-18 10:32:45 +02:00
dgtlmoon
fd2d938528 GitHub container repo (#227) 2021-09-18 00:11:54 +02:00
dgtlmoon
cd20dea590 Remove extra build step 2021-09-17 23:50:58 +02:00
dgtlmoon
f921e98265 push github container master also 2021-09-17 23:44:57 +02:00
dgtlmoon
c0e905265c Tidy up workflow names 2021-09-17 23:38:50 +02:00
dgtlmoon
5e6a923c35 Attempt to setup GitHub Container Registry 2021-09-17 23:37:28 +02:00
dgtlmoon
7618081e83 v0.39.1 2021-09-17 18:40:16 +02:00
dgtlmoon
b903280cd0 Re #185 - [feature] Custom notifications templates per watch (#226)
* Re #185 - [feature] Custom text templates for the notification per monitored entry as override.
Bonus points: Adding validation for apprise URLs
2021-09-17 18:37:26 +02:00
dgtlmoon
5b60314e8b Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-09-17 16:03:06 +02:00
dependabot[bot]
dfd34d2a5b Bump tar from 6.1.6 to 6.1.9 in /changedetectionio/static/styles (#209)
Bumps [tar](https://github.com/npm/node-tar) from 6.1.6 to 6.1.9.
- [Release notes](https://github.com/npm/node-tar/releases)
- [Changelog](https://github.com/npm/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/npm/node-tar/compare/v6.1.6...v6.1.9)

---
updated-dependencies:
- dependency-name: tar
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-09-17 16:02:59 +02:00
dgtlmoon
98f6f0c80d Re #225 - Notifications refactor: token replacement - fix possible missing value for watch_title 2021-09-17 16:02:54 +02:00
dgtlmoon
8c65c60c27 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-09-17 16:00:49 +02:00
dgtlmoon
bd0d9048e7 Re #42 - Notifications refactor: token replacement - fix possible missing value for watch_title 2021-09-17 15:58:04 +02:00
dependabot[bot]
3b14be4fef Bump tar from 6.1.6 to 6.1.9 in /changedetectionio/static/styles (#209)
Bumps [tar](https://github.com/npm/node-tar) from 6.1.6 to 6.1.9.
- [Release notes](https://github.com/npm/node-tar/releases)
- [Changelog](https://github.com/npm/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/npm/node-tar/compare/v6.1.6...v6.1.9)

---
updated-dependencies:
- dependency-name: tar
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-09-03 16:17:45 +02:00
Matthias Langhard
05f7e123ed Adds 'Create Copy' feature to clone a watch (#184) 2021-08-26 22:10:17 +02:00
dgtlmoon
54d80ddea0 adding specific test (#205)
Regex UUID test
Co-authored-by: Minty <mmeminty@gmail.com>
2021-08-23 08:52:49 +02:00
Minty
b9e0ad052f New notification tokens - watch_uuid, watch_title, watch_tag, (#201)
* New notification tokens ; Tokens added: watch_uuid, watch_title, watch_tag, updated settings description
2021-08-22 22:36:10 +02:00
dgtlmoon
f8937e437a Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-08-22 18:49:29 +02:00
dgtlmoon
fbe9270528 Re #203 - validate tokens (#204)
* Re #203 - validate tokens
2021-08-22 18:45:32 +02:00
dgtlmoon
58c3bc371d No point hiding the notifications customisation area because it's now in its own tab 2021-08-22 18:44:51 +02:00
dgtlmoon
4683b0d120 Update README.md 2021-08-20 18:24:49 +02:00
dgtlmoon
5fb9bbdfa3 Test - prove that notifications are not being sent when content does not change 2021-08-19 18:58:30 +02:00
dgtlmoon
5883e5b920 remove quotes from env vars 2021-08-19 16:55:28 +02:00
dgtlmoon
b99957f54a Re https://github.com/dgtlmoon/changedetection.io/discussions/189
A note to not use quotes in env parts
2021-08-19 16:43:41 +02:00
dgtlmoon
21cb7fbca9 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-08-19 16:37:00 +02:00
dgtlmoon
4ed5d4c2e7 WebDriver fetcher - settings - when an alternative one is configured, show it in the label 2021-08-19 16:36:29 +02:00
dgtlmoon
8c3163f459 Update README.md 2021-08-16 16:34:44 +02:00
dgtlmoon
a11b6daa2e Installation via pip (#186)
Builder for https://pypi.org/project/changedetection.io/
2021-08-16 15:24:37 +02:00
dgtlmoon
642ad5660d Update README.md 2021-08-16 13:57:43 +02:00
dgtlmoon
252d6ee6fd Trigger text/wait (#187)
Re #71 - Ability to set filters
2021-08-16 13:13:17 +02:00
dgtlmoon
ba7b6b0f8b Reword group tag - more obvious name 2021-08-15 22:16:18 +02:00
dgtlmoon
f2094a3010 Fix img alt/title accessibility for pause icon 2021-08-15 21:53:47 +10:00
dgtlmoon
b9ed7e2d20 Let the fetcher throw an exception which will be caught and handed to the operator anyway 2021-08-12 12:56:26 +02:00
dgtlmoon
6d3962acb6 Example placeholder was pushed out 2021-08-12 12:56:12 +02:00
dgtlmoon
32a0d38025 Move fetcher tab back to general - save space on mobile 2021-08-12 12:51:43 +02:00
dgtlmoon
df08d51d2a WebDriver test fetch should use environment var too 2021-08-12 12:33:31 +02:00
dgtlmoon
d87c643e58 Add fetch option to each watch 2021-08-12 12:28:17 +02:00
dgtlmoon
9e08f326be Chrome/Webdriver support for Javascript websites (#114)
JS Support via fetching the page over WebDriver/Selenium network
Refactor forms (Split into logical tabs)
2021-08-12 12:05:59 +02:00
dgtlmoon
1f821d6e8b Fixing tar npm security issue npm install "tar@>=6.1.2" 2021-08-07 14:20:13 +02:00
dgtlmoon
00fe4d4e41 tag 0.38.2 2021-08-07 14:18:28 +02:00
dgtlmoon
f88561e713 Re #172 - be sure that we are non-greedy matching the first : when splitting the headers so we don't break the "Cookie" header (#175) 2021-08-07 14:15:41 +02:00
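What "non-greedy matching the first :" boils down to in Python - split on the first colon only, so header values that themselves contain colons (like Cookie values) stay intact:

    header_line = "Cookie: session=abc; theme=dark:blue"

    # maxsplit=1 keeps everything after the first ':' as the value.
    name, value = header_line.split(":", 1)
    print(name.strip(), "->", value.strip())  # -> Cookie -> session=abc; theme=dark:blue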
dgtlmoon
dd193ffcec Update heroku.yml
Re #156 - You can specify the port here too, to be sure
2021-07-28 17:18:10 +02:00
dgtlmoon
1e39a1b745 Re #156 - PORT should always be an Integer 2021-07-28 13:59:50 +02:00
Leigh
1084603375 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-07-26 07:11:54 +02:00
Leigh
3f9d949534 Re #159 - Adding env var example to docker-config.yml 2021-07-26 07:10:57 +02:00
Tim Chepeleff
684deaed35 Add Heroku Deployment Support (#159)
* add heroku.yml

* Use environment supplied port

* Update changedetection.py
2021-07-26 06:43:23 +02:00
Leigh
1b931fef20 Re #154 - Handle missing JSON better 2021-07-25 13:55:28 +02:00
Leigh
d1976db149 high res 2021-07-25 09:18:05 +02:00
Leigh
a8fb17df9a higher res screenshot 2021-07-25 09:17:02 +02:00
Leigh
8f28c80ef5 Update screenshot 2021-07-25 09:16:07 +02:00
Leigh
5a2c534fde Assert that html_tools.JSONNotFound is correctly raised 2021-07-25 07:22:29 +02:00
dgtlmoon
e2304b2ce0 Re #154 Ldjson extract parse (#158)
* Use parsable JSON hiding in <script type="application/ld+json"> where possible, if it matches the filter rule, use it.
* Update README.md
2021-07-25 07:02:19 +02:00
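A sketch of the ld+json fallback described above, assuming BeautifulSoup (BS4 is mentioned elsewhere in this log) plus the standard json module; the document and keys are illustrative:

    import json
    from bs4 import BeautifulSoup

    html = '<script type="application/ld+json">{"@type": "Product", "offers": {"price": "19.99"}}</script>'
    soup = BeautifulSoup(html, "html.parser")

    for script in soup.find_all("script", type="application/ld+json"):
        data = json.loads(script.string)
        print(data["offers"]["price"])  # -> 19.99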
dgtlmoon
b87236ea20 Responsive fix for input field on mobile 2021-07-22 21:39:41 +10:00
dgtlmoon
dfbc9bfc53 Re #148 - Always set something for {base_url} so we don't send a possibly empty body/title notification, which could break some services. 2021-07-22 20:09:42 +10:00
dgtlmoon
f3ba051df4 Add medium-size-desktop class to notification custom title 2021-07-22 20:06:27 +10:00
dgtlmoon
affe39ff98 Notification default: Make sure to use at least some text here - a blank notification body could be problematic for some services 2021-07-22 20:00:51 +10:00
dgtlmoon
0f5d5e6caf Re #150 - stop using 'size' across all elements and rely on CSS for a better mobile experience (stops fields from pushing out) 2021-07-22 19:38:10 +10:00
Preston
2a66ac1db0 fix: setting overflow in mobile view (#150) 2021-07-22 18:54:01 +10:00
dgtlmoon
07308eedbd Re #121, #123 - Show the current base_url value 2021-07-22 10:52:29 +10:00
dgtlmoon
750b882546 Re #149 - allow empty timestamp limit for scrub operation 2021-07-22 10:32:42 +10:00
dgtlmoon
1c09407e24 Don't show "new version available" message when a password is enabled and the user is logged out 2021-07-21 21:47:42 +10:00
dgtlmoon
7e87591ae5 test fix - don't trigger notifications in header test 2021-07-21 20:31:52 +10:00
dgtlmoon
9e6c2bf3e0 Strengthen the notification tests 2021-07-21 20:21:12 +10:00
dgtlmoon
c396cf8176 Re #137 - Adding test to confirm that headers are not repeated 2021-07-21 19:51:12 +10:00
dgtlmoon
b19a037fac Add debug output to notify loop 2021-07-21 13:13:31 +10:00
dgtlmoon
5cd4a36896 Add note to field 2021-07-21 13:05:30 +10:00
dgtlmoon
aec3531127 Cleanup test helper data before and after running 2021-07-21 12:49:32 +10:00
dgtlmoon
78434114be Improve debug info 2021-07-21 12:49:22 +10:00
dgtlmoon
f877cbfe8c 0.38.1 tag 2021-07-20 17:57:27 +10:00
dgtlmoon
fe4963ec04 Re #143 - Remove old notification test code, fix form handler (#145)
* Re #143 - global notification settings box fix - Remove old notification test code, fix form handler, add test
2021-07-20 17:44:01 +10:00
dgtlmoon
32a798128c Update README.md 2021-07-18 18:15:44 +10:00
dgtlmoon
cf4e294a9c Re #135 - refactor the quick add widget (#136)
* Re #135 - refactor the quick add widget

* Fix W3C validation issues
2021-07-18 13:26:23 +10:00
Richard Schwab
b008269a70 Partially revert 47e5a7cf09 (#138)
Copy HTTP headers from the global template instead of updating the global template when fetching a site.

fixes #137
2021-07-18 10:12:23 +10:00
dgtlmoon
50026ee6d9 use a github action for getting the tag 2021-07-16 16:24:01 +10:00
dgtlmoon
aa5ba7b3a9 rename tag build runner 2021-07-16 16:19:04 +10:00
dgtlmoon
4110d05bf8 fix tag 2021-07-16 16:12:03 +10:00
dgtlmoon
6c02bc9cd3 build and push tag 2021-07-16 16:03:45 +10:00
dgtlmoon
0a9b5f801f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-07-14 20:21:14 +10:00
dgtlmoon
b4630d4200 Re #76 - Fixing links 2021-07-14 20:20:47 +10:00
dgtlmoon
2238b7d660 It is cleaner to let flexbox overflow and scroll on the X axis where needed 2021-07-14 14:46:18 +10:00
dgtlmoon
e6fadc44fa #76 app path prefix when behind proxy_pass (#91)
Support for running in a sub-path under proxy_pass (Running changedetection.io behind a reverse proxy sub directory) 
More here https://github.com/dgtlmoon/changedetection.io/wiki/Running-changedetection.io-behind-a-reverse-proxy-sub-directory
2021-07-14 14:02:24 +10:00
dgtlmoon
c0b6233912 Settings: Remove password link fix 2021-07-14 13:38:32 +10:00
dgtlmoon
9669f8248e Make sure right menu is still visible when URL is long 2021-07-14 13:36:58 +10:00
dgtlmoon
b2b8958f7b 0.38 release 2021-07-14 11:51:33 +10:00
dgtlmoon
83daa6f630 Re #132 - Make a list of the JSONpath results instead of using only the first value 2021-07-14 11:15:32 +10:00
dgtlmoon
dad48402f1 Customisable notifications (#123)
* Customisable notifications (#121)
* Test improvements
* Setup BASE_URL environment in test

Co-authored-by: dtomlinson91 <53234158+dtomlinson91@users.noreply.github.com>
2021-07-13 18:48:21 +10:00
dgtlmoon
655a350f50 Re #117 - don't re-encode single-value types, looks better in the diff 2021-07-12 18:27:03 +10:00
dgtlmoon
ae0fc5ec0f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-07-11 23:05:22 +10:00
dgtlmoon
851142446d Usability tweak - [edit] on diff page should go back to diff page 2021-07-11 22:56:43 +10:00
dgtlmoon
dc2896c452 Update README.md 2021-07-11 22:11:53 +10:00
dgtlmoon
306814f47f Adding text about JSON API Monitoring 2021-07-11 22:10:49 +10:00
dgtlmoon
e073521f4d Re #117 Jsonpath based JSON change detection filter (#125)
* Re #117 - Experimental JSON selector support by using 'json:' prefix and any JSONpath rule
2021-07-11 22:07:39 +10:00
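A hedged sketch of what a 'json:'-prefixed rule does, assuming the jsonpath-ng library (the exact library and rule syntax used by the app may differ):

    import json
    from jsonpath_ng import parse

    document = json.loads('{"product": {"name": "Widget", "price": 19.99}}')

    # The part after the 'json:' prefix is treated as a JSONPath expression.
    expression = parse("$.product.price")
    print([match.value for match in expression.find(document)])  # -> [19.99]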
dgtlmoon
f2643c1b65 Update README.md 2021-07-11 19:38:54 +10:00
dgtlmoon
0e291de045 Update README.md 2021-07-11 19:36:44 +10:00
dgtlmoon
2f22d627fa Use right sticky for version 2021-07-10 23:14:59 +10:00
dgtlmoon
cd622261e9 Re #118 - Make 'show current version' more obvious 2021-07-10 23:07:46 +10:00
dgtlmoon
39a696fc7c Diff page - use the document title in <title> for better bookmarking 2021-07-10 16:31:16 +10:00
dgtlmoon
db5afa1fa2 node-sass 6.0.1 works much better with node-sass watch 2021-07-06 23:04:40 +10:00
dgtlmoon
56c56c63e8 Updating inscriptis/text/html library to 1.2 2021-07-04 23:09:49 +10:00
dgtlmoon
cb0d69801f Update readme with the branch link for javascript support 2021-07-04 13:51:19 +10:00
dgtlmoon
99ddc0490b Updating trim-newlines packages 2021-07-03 12:04:26 +10:00
dgtlmoon
b27d03e8c7 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-07-02 20:19:26 +10:00
dgtlmoon
f852bdda0e 0.37 release 2021-07-02 20:18:41 +10:00
dgtlmoon
b85af8904a #110 global recheck time (#113)
* Re #106 - handling empty title with gettr cleanup

* Re #110 - Global recheck time improvements, add tests, add form feedback, follow default minutes

* Adding comments
2021-07-02 12:14:09 +10:00
dgtlmoon
db18866b0a Re #106 - handling empty title with gettr cleanup (#107) 2021-06-27 12:29:41 +10:00
dgtlmoon
3fa6bc5ffd Update README.md
Adding more docker start help
2021-06-26 13:34:40 +10:00
dgtlmoon
25185e6d00 Auto extract html title as title (#102)
* Auto extract <title> as watch title, Minor refactor for html tooling
2021-06-24 19:10:19 +10:00
dgtlmoon
9af1ea9fc0 Bug fix - Check 'minutes_between_check' is set 2021-06-24 11:26:16 +10:00
dgtlmoon
aa51c7d34c tweak <pre> text wrapping when displaying diff 2021-06-23 21:05:22 +10:00
dgtlmoon
f215adbbe5 CSS Filter - It is smarter to just extract the HTML blob and continue with inscriptis, so we have almost the same output as not using the filter 2021-06-23 20:40:01 +10:00
dgtlmoon
8d59ef2e10 CSS Filter - restore nicer linefeeds 2021-06-23 12:52:04 +10:00
dgtlmoon
e3a9847f74 @todo Comment - BS4's element.get_text() seems to lose the indentation format no matter what 2021-06-23 12:49:53 +10:00
dgtlmoon
47f7698b32 CSS Filter - strip text of whitespace, preserve newlines where applicable, remove extra newlines 2021-06-23 12:29:14 +10:00
dgtlmoon
c6a4709987 Include statistics for number of watches 2021-06-22 11:40:45 +10:00
dgtlmoon
6c35995cff Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-06-22 11:21:29 +10:00
dgtlmoon
fa6c31fd50 Set edit-form for settings+watch to always be wide 2021-06-22 11:20:51 +10:00
dgtlmoon
58dfeaeec8 Update README.md 2021-06-22 10:33:27 +10:00
dgtlmoon
f717ad1bb6 0.36 2021-06-22 10:23:58 +10:00
dgtlmoon
8a0b33c1e8 Re #42 - don't use blank titles 2021-06-22 10:21:53 +10:00
dgtlmoon
f762d889f9 Re #100 - Fixing storage of minutes_between_check and adding automated test for field storage 2021-06-22 10:16:56 +10:00
dgtlmoon
d82465d428 0.35 2021-06-22 00:28:41 +10:00
dgtlmoon
74cf72c9cd Time between rechecks is always stored as minutes 2021-06-22 00:25:34 +10:00
dgtlmoon
03c1ad3989 Ability to reset app password by placing a file called removepassword.lock into your data directory and restarting the instance 2021-06-21 22:57:48 +10:00
dgtlmoon
ed7c2f01da Adding tests for password control handling 2021-06-21 22:36:09 +10:00
dgtlmoon
0923aa5b73 Remove unused field (removepassword is actually a link) 2021-06-21 22:32:59 +10:00
dgtlmoon
04acd8b2f8 0.34 2021-06-21 22:13:14 +10:00
dgtlmoon
45bd454e26 Be sure not to use blank passwords as the password 2021-06-21 22:12:47 +10:00
dgtlmoon
a429223858 Re #42 - custom title (#98) 2021-06-21 21:44:58 +10:00
dgtlmoon
59eb83974e Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-06-21 20:08:42 +10:00
dgtlmoon
d4928e34eb 0.33 2021-06-21 20:07:04 +10:00
dgtlmoon
8bcc277310 Re #92 - Re-use existing [preview] function for viewing current (#97) 2021-06-21 19:35:13 +10:00
dgtlmoon
53b9640ac5 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-06-21 17:32:06 +10:00
dgtlmoon
854520005d #81 - Regex support (#90)
* Re #81 - Regex support
* minor cleanup
2021-06-21 17:17:22 +10:00
dgtlmoon
4dbfd376f2 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-06-21 16:21:30 +10:00
dgtlmoon
af24079053 Use wtforms handler (#96)
Refactor forms and styling with wtforms
2021-06-21 16:21:05 +10:00
dgtlmoon
a91c4dbe92 Re #95 - Include PUID/PGID example 2021-06-21 10:03:08 +10:00
dgtlmoon
3f9fab3944 re-enable tests 2021-06-21 09:48:52 +10:00
dgtlmoon
1772568559 On settings submit, display saved message 2021-06-19 10:20:48 +10:00
dgtlmoon
fa3ce97634 Use Flask's built-in 'flash' method instead of custom message/notices (#94)
* Use Flask's built-in 'flash' method instead of a custom message/notice handler

* Move app.secret_key setup to inside app
2021-06-18 22:04:29 +10:00
dgtlmoon
fed2de66a0 Adding rPi support info 2021-06-18 22:00:33 +10:00
dgtlmoon
e761405f58 Re #92 Adding link to CSS selector help in wiki 2021-06-18 11:37:27 +10:00
dgtlmoon
23738c98bc Re #93 - tweak build packages 2021-06-17 23:04:51 +10:00
dgtlmoon
07c7663e56 Re #93, #79 - docker image multistage build lost the packages required for rPi etc 2021-06-17 22:42:07 +10:00
Leonardo Brondani Schenkel
cec45a7ad7 Strip surrounding whitespace from elements (#89) 2021-06-16 13:57:22 +10:00
dgtlmoon
dc62bcdfca Queue an entry for immediate recheck after [edit] 2021-06-16 13:38:01 +10:00
dgtlmoon
d304449cb1 Adding helper method to remove text files that are not in the index 2021-06-16 10:57:55 +10:00
dgtlmoon
878584f043 Fix typo 2021-06-15 14:34:10 +10:00
dgtlmoon
b4fa7d2089 Re #88 - placeholder text on CSS rule 2021-06-15 14:13:01 +10:00
dgtlmoon
b0592df3cb Re #86 - fix typo 2021-06-15 11:14:16 +10:00
dgtlmoon
ddd8bd34f2 0.32 release 2021-06-15 09:50:24 +10:00
dgtlmoon
afea79adf9 Sassify the diff page 2021-06-14 21:04:06 +10:00
dgtlmoon
444510c9ca "Sassify" the theme, easier to manage 2021-06-14 20:42:42 +10:00
dgtlmoon
1f1d2708c6 Mobile fixes (#87)
#48 - Settings page on Android didn't work
- Responsive table layout for the watch list
- Few more improvements
2021-06-14 19:40:41 +10:00
dgtlmoon
bae6641777 Re #86 - Refactor scrub date limit code 2021-06-14 17:56:09 +10:00
dgtlmoon
17830de489 Tweak comments 2021-06-13 11:02:11 +10:00
dgtlmoon
0acf9cc9cb Re #77 - Repair and refactor time threshold check code 2021-06-13 10:59:15 +10:00
khakers
cff8959462 Modifies Dockerfile to use multistage builds (#79) 2021-06-08 12:30:45 +10:00
dgtlmoon
4b6522469b Bumping to 0.31 2021-06-05 16:37:16 +10:00
dgtlmoon
609a0a3aad Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-06-03 10:51:18 +10:00
dgtlmoon
ad8065c072 Re #75 - Adding test to confirm watched URL appears in RSS feed 2021-06-03 10:50:59 +10:00
dgtlmoon
2346b42ef2 CSS selector filter (#73)
* Re #9 CSS Selector filtering,  Adding test for #9
2021-05-30 21:22:26 +10:00
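A small sketch of CSS-selector filtering with BeautifulSoup (BS4 is referenced elsewhere in this log); the selector and markup are illustrative:

    from bs4 import BeautifulSoup

    html = "<html><body><nav>menu</nav><div id='content'><p>Only watch this part</p></div></body></html>"
    soup = BeautifulSoup(html, "html.parser")

    # Keep only the part of the page selected by the CSS rule before diffing.
    print("\n".join(el.get_text() for el in soup.select("#content")))  # -> Only watch this part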
dgtlmoon
1a0c3f1250 Fixing var name 2021-05-28 10:27:01 +10:00
dgtlmoon
91f69b92a2 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-05-28 10:20:53 +10:00
dgtlmoon
dd211d166c Include release metadata during github build 2021-05-28 10:20:42 +10:00
dgtlmoon
a6b0a23143 Update README.md 2021-05-28 00:03:17 +10:00
dgtlmoon
a03e53d826 Re #40 Ability to set individual timers (#72)
* Re #40 Ability to set individual timers
2021-05-27 23:55:05 +10:00
dgtlmoon
5d93009605 Update README.md 2021-05-27 20:37:56 +10:00
dgtlmoon
d4f3e744de Improvements for backup (#70)
* Remove previous backup files

* Backup - Add a text file containing only the URLs, with Windows+UNIX line-endings, for better portability.

* Fix filename on backup not being correct
2021-05-27 20:16:40 +10:00
dgtlmoon
13de31cf98 Update README.md 2021-05-26 21:26:35 +10:00
dgtlmoon
54ae82395a Disable image layer cache service 2021-05-25 16:46:13 +10:00
dgtlmoon
dba8944625 Re-enable ARM v6/v7 builds 2021-05-25 16:08:01 +10:00
dgtlmoon
270343b276 Install requirements, remove rust and dev packages that are no longer needed, hopefully for a smaller docker layer size 2021-05-25 15:06:35 +10:00
dgtlmoon
f3ce9b732c Remove rust build comments 2021-05-25 15:05:36 +10:00
dgtlmoon
baaee30499 Arm build fixes (#68)
* Add rustc compiler and remove when not needed (smaller docker layer)

* Using the magical ARG CRYPTOGRAPHY_DONT_BUILD_RUST=1 to get around ARM issues
2021-05-25 15:04:33 +10:00
dgtlmoon
d50ff0b31c Re #65 - Append BASE_URL env var to the notification if it is set (#66)
* Re #65 - Append BASE_URL env var to the notification if it is set
2021-05-21 09:16:19 +10:00
dgtlmoon
395a6fca62 Update README.md 2021-05-19 13:09:41 +10:00
dgtlmoon
f582810ad0 Adding BTC support instructions 2021-05-18 23:34:56 +10:00
dgtlmoon
18b71edd6d Switch to just amd64 for now due to apprise not building on ARM 2021-05-15 21:23:05 +10:00
dgtlmoon
28f6af9153 Fixing syntax 2021-05-15 18:20:34 +10:00
dgtlmoon
63a3492547 Re #49 Re #60 - Adding more information about proxy setup to README.md 2021-05-15 18:13:00 +10:00
Unpublished
454fc26341 Add socks proxy support (#60)
* Add socks proxy support

* Add proxy config to README
2021-05-15 18:05:58 +10:00
KibosJ
e5409f8d16 Created docker-compose file (#55)
* Created docker-compose file, Removed version tag as per latest compose specification
2021-05-15 11:48:38 +10:00
dgtlmoon
1b736b3726 Re #58 - reduce to 1 minute (a small rewrite is required to change the backend to store in 'seconds' instead of minutes) 2021-05-13 22:33:33 +10:00
dgtlmoon
96f2b0d248 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-05-13 22:24:45 +10:00
dgtlmoon
308527f45e 56 - Fix notification test 2021-05-13 22:23:49 +10:00
dgtlmoon
70d766b647 Update README.md 2021-05-08 23:16:16 +10:00
dgtlmoon
40be9c615f Update README.md 2021-05-08 23:15:50 +10:00
dgtlmoon
f380754ff5 Adding rust compiler :( 2021-05-08 12:27:39 +10:00
dgtlmoon
bee6bd9fe0 trying without libssl and only libffi 2021-05-08 12:17:28 +10:00
dgtlmoon
fec2862ebe Adding extra libs required for build 2021-05-08 12:01:52 +10:00
dgtlmoon
969420e40b Cleanup docs 2021-05-08 11:44:43 +10:00
dgtlmoon
afba06dd1f Tweak workflow (tests) 2021-05-08 11:38:27 +10:00
dgtlmoon
1d66160e8c Security update 2021-05-08 11:33:46 +10:00
dgtlmoon
f877af75b9 Apprise notifications (#43)
* issue #4 Adding settings screen for apprise URLS
* Adding test notification mechanism

* Move Worker module to own class file

* Adding basic notification URL runner

* Tests for notifications

* Tweak readme with notification info

* Move notification test to main test_backend.py

* Fix spacing

* Adding notifications screenshot

* Cleanup more files from test

* Offer send notification test on individual edits and main/default

* Process global notifications

* All branches test

* Wrap worker notification process in try/catch, use global if nothing set

* Fix syntax

* Handle exception, increase wait time for liveserver to come up

* Fixing test setup

* remove debug

* Split tests into their own totally isolated setups, if you know a better way to make live_server() work, MR :)

* Tidying up lint/imports
2021-05-08 11:29:41 +10:00
dgtlmoon
b752690f89 Fixing security update 2021-05-08 10:19:49 +10:00
dgtlmoon
a10efa951b Also detect pytest in the environ (for local debug) 2021-05-03 11:20:11 +10:00
dgtlmoon
24a38f26f8 Prepend 'test-' to the GUID when running under pytest 2021-05-03 11:03:00 +10:00
dgtlmoon
1d0018dced - Relabel login button
- misc test cleanup
2021-05-01 11:55:24 +10:00
dgtlmoon
18c7a18be8 Re #46 - Add note to README.md about Javascript support 2021-05-01 10:02:43 +10:00
dgtlmoon
c11adcbe4a Bumping version 2021-05-01 01:20:56 +10:00
dgtlmoon
cd6ce89587 Re #45 - Set datastore path in app.config 2021-05-01 01:18:59 +10:00
dgtlmoon
4164ad29e3 Re #44 - Broke the menu by accident, adding tests and fixing. 2021-04-30 19:54:23 +10:00
dgtlmoon
4953e253e9 bump to 0.29 2021-04-30 17:17:23 +10:00
dgtlmoon
64e172433a docker-compose for dev not needed (use venv etc) 2021-04-30 16:54:07 +10:00
dgtlmoon
92c0fa90ee Password protection / login support (#34)
Issue #24 Password login  hashlib.pbkdf2_hmac implementation
2021-04-30 16:47:13 +10:00
dgtlmoon
ee8053e0e8 Update FUNDING.yml 2021-04-21 11:13:50 +10:00
dgtlmoon
7f5b592f6f Skip using tag limit on pause when no tag is being viewed 2021-04-16 10:29:03 +10:00
dgtlmoon
1e45156bc0 Pause/Unpause should respect limit tag on redirect 2021-04-10 19:47:31 +09:30
dgtlmoon
c7169ebba1 Validate duplicate URLs 2021-04-10 14:31:57 +09:30
dgtlmoon
a58679f983 Chdir is not needed because we add the file from the full path, but make it 'relative' in the Zip 2021-04-09 04:50:55 +02:00
dgtlmoon
661542b056 Fix backup generation on relative paths (like when run outside docker, under venv, etc) 2021-04-09 04:49:50 +02:00
dgtlmoon
2ea48cb90a Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-04-04 06:32:04 +02:00
dgtlmoon
2a80022cd9 Adding noopener per CodeQL, stop pages from knowing the referer etc 2021-04-04 06:31:42 +02:00
dgtlmoon
8861f70ac4 Create codeql-analysis.yml 2021-04-04 06:27:32 +02:00
dgtlmoon
07113216d5 yarl not needed, lock requests version 2021-04-03 10:28:11 +02:00
dgtlmoon
02062c5893 dev packages needed, drop apt cache 2021-04-03 09:05:02 +02:00
dgtlmoon
a11f09062b See if we get a clean buildx without dev packages 2021-04-03 08:45:24 +02:00
dgtlmoon
0bb48cbd43 Tweaking build size thanks to https://github.com/hadolint/hadolint 2021-04-03 08:04:42 +02:00
dgtlmoon
7109a17a8e Adding dockerignore 2021-04-03 07:59:22 +02:00
dgtlmoon
4ed026aba6 Re #18 - Show "preview" of the page when only one revision exists (#33) 2021-04-03 05:55:43 +02:00
dgtlmoon
3b79f8ed4e Update README.md 2021-04-02 05:00:58 +02:00
dgtlmoon
5d02c4fe6f Update README.md 2021-04-02 04:58:49 +02:00
dgtlmoon
f2b06c63bf Also check that the watch is not paused before putting it into the checking queue 2021-04-02 03:58:23 +02:00
dgtlmoon
ab6f4d11ed revert c60be56271 2021-04-02 03:07:36 +02:00
dgtlmoon
5311a95140 remove extra packages (#32)
* remove extra packages

* add test only workflow
2021-04-02 02:57:48 +02:00
dgtlmoon
fb723c264d Bumping version to 0.28 2021-04-01 14:43:46 +02:00
dgtlmoon
3ad722d63c Docker push amd64 rpi etc (#28)
* trying multiarch docker hub push on build, similar to https://github.com/dgtlmoon/changedetection.io/pull/25/files

* Adding image builder

* Include our dev branch

* Tweak buildx

* dont use alias

* Finally found the right info at https://docs.docker.com/ci-cd/github-actions/

* Updated from https://github.com/razorpay/docker-build-push-action

* Tweaks to build

* Tweaks

* Minor tweaks to version

* tweaks

* Remove version

* Remove old workflow

* syntax cleanup
2021-04-01 14:10:23 +02:00
dgtlmoon
9c16695932 Open [diff] links into their own window 2021-04-01 12:57:47 +02:00
dgtlmoon
35fc76c02c Fix auto jump on viewing the diff 2021-04-01 12:53:19 +02:00
dgtlmoon
934d8c6211 Re #30 - Delete history watch snapshots (#31)
Re #30 - Delete history watch snapshots. Scrub - optionally delete history snapshots newer than timestamp
2021-04-01 12:01:42 +02:00
dgtlmoon
294256d5c3 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-03-29 18:38:20 +02:00
dgtlmoon
b7efdfd52c Slow down the DB write interval and catch the case that it changed during write 2021-03-29 18:37:03 +02:00
dgtlmoon
6a78b5ad1d Immediately 'jump' to the change 2021-03-29 18:36:50 +02:00
dgtlmoon
98f3e61314 Tweak to hover pause icon 2021-03-29 18:36:31 +02:00
dgtlmoon
e322c44d3e Stop runtime error on dict changing during write/init at start (#27)
* Lock datastore when writing

* Racecase fix

* Tweaks to locking (add delay)
2021-03-29 18:23:13 +02:00
dgtlmoon
7b226e1d54 Merge pull request #26 from dgtlmoon/pause
Re #22 - ability to pause
2021-03-29 16:14:16 +02:00
dgtlmoon
35e597a4c8 Re #22 - ability to pause 2021-03-29 16:11:22 +02:00
dgtlmoon
0a1a8340c2 Re #23 - always check value of interval time, not just on start 2021-03-29 15:04:15 +02:00
dgtlmoon
8b5cd40593 Update README.md 2021-03-26 11:07:06 +01:00
dgtlmoon
7d978a6e65 Merge pull request #19 from dgtlmoon/markdown-tweak
Use absolute image links so the screenshots work from docker hub
2021-03-04 09:59:37 +01:00
dgtlmoon
fdab52d400 Use absolute image links so the screenshots work from docker hub 2021-03-04 09:58:58 +01:00
dgtlmoon
782795310f Update README.md
Removing text that is tricky to maintain and confusing
2021-03-03 09:01:14 +01:00
Leigh Morresi
2280e6d497 Updating screenshot 2021-03-01 16:12:30 +01:00
Leigh Morresi
822f3e6d20 Reuse the GUID if we have one 2021-03-01 16:01:53 +01:00
dgtlmoon
35546c331c Merge pull request #15 from dgtlmoon/dev
Prepare 0.27
2021-03-01 15:50:25 +01:00
Leigh Morresi
982a0d7781 Don't show 'empty' tag, it will be in the [ALL] list 2021-03-01 15:44:34 +01:00
Leigh Morresi
c5c3e8c6c2 Adding RSS feed icon 2021-03-01 15:39:36 +01:00
Leigh Morresi
ff1b19cdb8 Generic object sync should use private method 2021-03-01 15:32:59 +01:00
Leigh Morresi
df96b8d76c Add missing urllib3 2021-03-01 15:21:15 +01:00
Leigh Morresi
89134b5b6c Add missing pytz 2021-03-01 15:11:03 +01:00
Leigh Morresi
b31bf34890 Check for new version 2021-03-01 15:09:37 +01:00
Leigh Morresi
5b2fda1a6e Fix import form flow logic 2021-03-01 14:33:25 +01:00
Leigh Morresi
fb38b06eae Code tidy/lint 2021-03-01 14:31:45 +01:00
Leigh Morresi
e0578acca2 Tidy up thread logic and version check 2021-03-01 14:29:21 +01:00
Leigh Morresi
187523d8d6 Add missing dep 2021-03-01 12:45:56 +01:00
Leigh Morresi
b0975694c8 Remove todos 2021-03-01 11:52:29 +01:00
Leigh Morresi
b1fb47e689 Add icon for RSS, RSS should show only unviewed entries 2021-03-01 11:51:28 +01:00
Leigh Morresi
a82e9243a6 Issue #7 - RSS feeds 2021-03-01 11:25:04 +01:00
Leigh Morresi
e3e36b3cef Always override tag version (load from disk in future, so we can add it at build time) 2021-02-27 23:20:40 +01:00
Leigh Morresi
cd6465f844 next dev is 0.27 2021-02-27 22:49:56 +01:00
Leigh Morresi
30d53c353f Tweak to tests 2021-02-27 22:09:25 +01:00
Leigh Morresi
47fcb8b4f8 Move logic 2021-02-27 22:01:42 +01:00
Leigh Morresi
0ec9edb971 Remove erroneous extra liveserver setup 2021-02-27 20:30:36 +01:00
Leigh Morresi
f1da8f96b6 When new ignore text is specified, reprocess the checksum 2021-02-27 20:30:06 +01:00
Leigh Morresi
8bc7b5be40 Adding filter and log output to pytest 2021-02-27 20:29:52 +01:00
Leigh Morresi
022826493b Fix edit action link 2021-02-27 20:29:01 +01:00
Leigh Morresi
092f77f066 Minor lint cleanup 2021-02-27 09:38:51 +01:00
Leigh Morresi
013cbcabd4 Clean up after test case 2021-02-27 09:37:40 +01:00
Leigh Morresi
66be95ecc6 Remove liveserver, doesn't belong here 2021-02-27 09:08:25 +01:00
Leigh Morresi
efe0356f37 Fix syntax, Triggers the workflow on push or pull request events 2021-02-27 09:06:54 +01:00
Leigh Morresi
ec1ac300af Activate workflow on all branches 2021-02-27 09:05:25 +01:00
Leigh Morresi
468184bc3a Issue #14 - Tweaks to edit, create ignore text, tests for ignore text, integrate ignore text 2021-02-26 20:07:26 +01:00
Leigh Morresi
0855017dca Validation of added headers, should contain key/val (2 parts) 2021-02-26 16:52:14 +01:00
Leigh Morresi
ae0f640ff4 Issue #12 include version for easy reference. 2021-02-24 14:44:35 +01:00
Leigh Morresi
cd6629ac2d Bring dev environment inline 2021-02-24 14:44:28 +01:00
Leigh Morresi
3c3ca7944b Tidying up requirements.txt 2021-02-24 14:44:13 +01:00
Leigh Morresi
b0fb52017c Handle the case of someone supplying a bad link 2021-02-24 09:56:29 +01:00
Leigh Morresi
fc6fba377a Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-02-24 09:53:58 +01:00
Leigh Morresi
7ea39ada7c Adding jump to next change diff widget 2021-02-24 09:53:40 +01:00
dgtlmoon
e98ea37342 Moving nice screenshot to above the fold :) 2021-02-22 16:39:04 +01:00
dgtlmoon
e20577df15 Adding docker hub badge for tag information 2021-02-22 14:48:57 +01:00
Leigh Morresi
19dcbc2f08 Bumping schema tag to 0.25 2021-02-22 08:53:04 +01:00
Leigh Morresi
c59838a6e4 Issue #5 - Remove arbitrary '600' minutes limit 2021-02-22 08:38:41 +01:00
Leigh Morresi
0a8c339535 Add test delay for github action test 2021-02-21 21:08:04 +01:00
Leigh Morresi
cd5b703037 Add wait for threads in test 2021-02-21 20:54:15 +01:00
Leigh Morresi
90642742bd Extending tests to cover resetting the diff/unviewed status correctly 2021-02-21 20:46:56 +01:00
Leigh Morresi
96221598e7 Tidy up return logic 2021-02-21 20:23:50 +01:00
Leigh Morresi
98623de38c Code tidy 2021-02-21 20:14:35 +01:00
Leigh Morresi
33985dbd9d Fix docker app files paths 2021-02-21 16:31:42 +01:00
Leigh Morresi
a3a5ca78bf Tweaking Dockerfile for new eventlet wrapper 2021-02-21 16:13:55 +01:00
dgtlmoon
3fcbbb3fbf Create LICENSE 2021-02-21 15:42:45 +01:00
dgtlmoon
70252b24f9 Adding docker pulls counter badge 2021-02-21 15:39:17 +01:00
dgtlmoon
0a08616c87 Merge pull request #11 from dgtlmoon/pytest
Separate flask from eventlet runtime and get pytest working
2021-02-21 15:22:54 +01:00
Leigh Morresi
beebba487c Use master branch for badge 2021-02-21 15:21:30 +01:00
Leigh Morresi
cbeafcbaa0 Removing unused import 2021-02-21 14:26:58 +01:00
Leigh Morresi
e200cd3289 Fixing a few more easy lint wins 2021-02-21 14:26:19 +01:00
Leigh Morresi
22c7a1a88d Merge branch 'pytest' of github.com:dgtlmoon/changedetection.io into pytest 2021-02-21 14:21:45 +01:00
Leigh Morresi
63eea2d6db Linting fixups 2021-02-21 14:21:14 +01:00
dgtlmoon
3e9a110671 Update README.md 2021-02-21 14:15:21 +01:00
Leigh Morresi
22bc8fabd1 Add badge under pytest branch 2021-02-21 14:14:27 +01:00
Leigh Morresi
9030070b3d Merge branch 'master' into pytest 2021-02-21 14:09:49 +01:00
dgtlmoon
fca7bb8583 Create python-app.yml 2021-02-21 14:09:34 +01:00
Leigh Morresi
3c175bfc4a Create the test datastore 2021-02-21 14:08:34 +01:00
Leigh Morresi
fd5475ba38 Minor cleanup 2021-02-21 14:05:52 +01:00
Leigh Morresi
b0c5dbd88e Just use the current/previous md5 2021-02-21 13:46:16 +01:00
Leigh Morresi
1718e2e86f Finalise pytest methods 2021-02-21 13:41:00 +01:00
Leigh Morresi
b46a7fc4b1 Port should be an integer 2021-02-21 13:40:48 +01:00
Leigh Morresi
4770ebb2ea Tweaking client 2021-02-16 21:48:38 +01:00
Leigh Morresi
d4db082c01 remove unused imports 2021-02-16 21:44:44 +01:00
Leigh Morresi
c8607ae8bb Use session/client fixture 2021-02-16 21:42:26 +01:00
Leigh Morresi
b361a61d18 Adding missing files 2021-02-16 21:36:41 +01:00
Leigh Morresi
87f4347fe5 hack of pytest implementation - doesn't work yet 2021-02-16 21:35:28 +01:00
Leigh Morresi
93ee65fe53 Tidy up a few broken datastore paths 2021-02-12 19:43:05 +01:00
Leigh Morresi
9f964b6d3f WIP, separate out the Flask from everything else, get pytest working 2021-02-12 19:24:30 +01:00
Leigh Morresi
426b09b7e1 Make records in the overview bold when they have a difference that has not been viewed in the [diff] tab 2021-02-11 10:36:54 +01:00
Leigh Morresi
ec98415c4d Adding 0.24 tag 2021-02-05 18:46:00 +01:00
Leigh Morresi
47e5a7cf09 Avoid accidentally using Python's objects that are copied - but land as a 'soft reference', need to use a better dict struct in the future #6 2021-02-05 18:43:35 +01:00
Leigh Morresi
d07cf53a07 Minor fix to 'last changed' field, simplify template and logic 2021-02-04 13:15:39 +01:00
Leigh Morresi
b9f73a6240 Remove debug print 2021-02-04 12:55:13 +01:00
327 changed files with 35155 additions and 2706 deletions

31
.dockerignore Normal file

@@ -0,0 +1,31 @@
# Git
.git/
.gitignore
# GitHub
.github/
# Byte-compiled / optimized / DLL files
**/__pycache__
**/*.py[cod]
# Caches
.mypy_cache/
.pytest_cache/
.ruff_cache/
# Distribution / packaging
build/
dist/
*.egg-info*
# Virtual environment
.env
.venv/
venv/
# IntelliJ IDEA
.idea/
# Visual Studio
.vscode/

9
.github/FUNDING.yml vendored

@@ -1,12 +1,3 @@
# These are supported funding model platforms
github: dgtlmoon
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

62
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,62 @@
---
name: Bug report
about: Create a bug report. If you don't follow this template, your report will be DELETED
title: ''
labels: 'triage'
assignees: 'dgtlmoon'
---
**DO NOT USE THIS FORM TO REPORT THAT A PARTICULAR WEBSITE IS NOT SCRAPING/WATCHING AS EXPECTED**
This form is only for direct bugs and feature requests to do directly with the software.
Please report watched websites (full URL and _any_ settings) that do not work with changedetection.io as expected [**IN THE DISCUSSION FORUMS**](https://github.com/dgtlmoon/changedetection.io/discussions) or your report will be deleted
CONSIDER TAKING OUT A SUBSCRIPTION FOR A SMALL PRICE PER MONTH, YOU GET THE BENEFIT OF USING OUR PAID PROXIES AND FURTHERING THE DEVELOPMENT OF CHANGEDETECTION.IO
THANK YOU
**Describe the bug**
A clear and concise description of what the bug is.
**Version**
*Exact version* in the top right area: 0....
**How did you install?**
Docker, Pip, from source directly etc
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
! ALWAYS INCLUDE AN EXAMPLE URL WHERE IT IS POSSIBLE TO RE-CREATE THE ISSUE - USE THE 'SHARE WATCH' FEATURE AND PASTE IN THE SHARE-LINK!
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.


@@ -0,0 +1,23 @@
---
name: Feature request
about: Suggest an idea for this project
title: '[feature]'
labels: 'enhancement'
assignees: ''
---
**Version and OS**
For example, 0.123 on linux/docker
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe the use-case and give concrete real-world examples**
Attach any HTML/JSON, give links to sites, screenshots etc, we are not mind readers
**Additional context**
Add any other context or screenshots about the feature request here.

14
.github/dependabot.yml vendored Normal file

@@ -0,0 +1,14 @@
version: 2
updates:
- package-ecosystem: github-actions
directory: /
schedule:
interval: "weekly"
"caronc/apprise":
versioning-strategy: "increase"
schedule:
interval: "daily"
groups:
all:
patterns:
- "*"

34
.github/test/Dockerfile-alpine vendored Normal file

@@ -0,0 +1,34 @@
# Taken from https://github.com/linuxserver/docker-changedetection.io/blob/main/Dockerfile
# Test that we can still build on Alpine (musl modified libc https://musl.libc.org/)
# Some packages won't install via PyPI because they don't have a wheel available under this architecture.
FROM ghcr.io/linuxserver/baseimage-alpine:3.21
ENV PYTHONUNBUFFERED=1
COPY requirements.txt /requirements.txt
RUN \
apk add --update --no-cache --virtual=build-dependencies \
build-base \
cargo \
git \
jpeg-dev \
libc-dev \
libffi-dev \
libxslt-dev \
openssl-dev \
python3-dev \
zip \
zlib-dev && \
apk add --update --no-cache \
libjpeg \
libxslt \
nodejs \
poppler-utils \
python3 && \
echo "**** pip3 install test of changedetection.io ****" && \
python3 -m venv /lsiopy && \
pip install -U pip wheel setuptools && \
pip install -U --no-cache-dir --find-links https://wheel-index.linuxserver.io/alpine-3.21/ -r /requirements.txt && \
apk del --purge \
build-dependencies

62
.github/workflows/codeql-analysis.yml vendored Normal file

@@ -0,0 +1,62 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
schedule:
- cron: '27 9 * * 4'
jobs:
analyze:
name: Analyze
runs-on: ubuntu-latest
strategy:
fail-fast: false
matrix:
language: [ 'javascript', 'python' ]
# CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
# Learn more:
# https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
steps:
- name: Checkout repository
uses: actions/checkout@v4
# Initializes the CodeQL tools for scanning.
- name: Initialize CodeQL
uses: github/codeql-action/init@v3
with:
languages: ${{ matrix.language }}
# If you wish to specify custom queries, you can do so here or in a config file.
# By default, queries listed here will override any specified in a config file.
# Prefix the list here with "+" to use these queries and those in the config file.
# queries: ./path/to/local/query, your-org/your-repo/queries@main
# Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
# If this step fails, then you should remove it and run the build manually (see below)
- name: Autobuild
uses: github/codeql-action/autobuild@v3
# Command-line programs to run using the OS shell.
# 📚 https://git.io/JvXDl
# ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
# and modify them (or add more) to build your code if your project
# uses a compiled language
#- run: |
# make bootstrap
# make release
- name: Perform CodeQL Analysis
uses: github/codeql-action/analyze@v3

127
.github/workflows/containers.yml vendored Normal file

@@ -0,0 +1,127 @@
name: Build and push containers
on:
# Automatically triggered by a testing workflow passing, but this is only checked when it lands in the `master`/default branch
# workflow_run:
# workflows: ["ChangeDetection.io Test"]
# branches: [master]
# tags: ['0.*']
# types: [completed]
# Or a new tagged release
release:
types: [published, edited]
push:
branches:
- master
jobs:
metadata:
runs-on: ubuntu-latest
steps:
- name: Show metadata
run: |
echo SHA ${{ github.sha }}
echo github.ref: ${{ github.ref }}
echo github_ref: $GITHUB_REF
echo Event name: ${{ github.event_name }}
echo Ref ${{ github.ref }}
echo c: ${{ github.event.workflow_run.conclusion }}
echo r: ${{ github.event.workflow_run }}
echo tname: "${{ github.event.release.tag_name }}"
echo headbranch: -${{ github.event.workflow_run.head_branch }}-
set
build-push-containers:
runs-on: ubuntu-latest
# If the testing workflow has a success, then we build to :latest
# Or if we are in a tagged release scenario.
if: ${{ github.event.workflow_run.conclusion == 'success' }} || ${{ github.event.release.tag_name }} != ''
steps:
- uses: actions/checkout@v4
- name: Set up Python 3.11
uses: actions/setup-python@v5
with:
python-version: 3.11
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install flake8 pytest
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
- name: Create release metadata
run: |
# COPY'ed by Dockerfile into changedetectionio/ of the image, then read by the server in store.py
echo ${{ github.sha }} > changedetectionio/source.txt
echo ${{ github.ref }} > changedetectionio/tag.txt
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
image: tonistiigi/binfmt:latest
platforms: all
- name: Login to GitHub Container Registry
uses: docker/login-action@v3
with:
registry: ghcr.io
username: ${{ github.actor }}
password: ${{ secrets.GITHUB_TOKEN }}
- name: Login to Docker Hub Container Registry
uses: docker/login-action@v3
with:
username: ${{ secrets.DOCKER_HUB_USERNAME }}
password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
with:
install: true
version: latest
driver-opts: image=moby/buildkit:master
# master branch -> :dev container tag
- name: Build and push :dev
id: docker_build
if: ${{ github.ref }} == "refs/heads/master"
uses: docker/build-push-action@v6
with:
context: ./
file: ./Dockerfile
push: true
tags: |
${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:dev,ghcr.io/${{ github.repository }}:dev
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v8,linux/arm64/v8
cache-from: type=gha
cache-to: type=gha,mode=max
# Looks like this was disabled
# provenance: false
# A new tagged release is required, which builds :tag and :latest
- name: Build and push :tag
id: docker_build_tag_release
if: github.event_name == 'release' && startsWith(github.event.release.tag_name, '0.')
uses: docker/build-push-action@v6
with:
context: ./
file: ./Dockerfile
push: true
tags: |
${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:${{ github.event.release.tag_name }}
ghcr.io/dgtlmoon/changedetection.io:${{ github.event.release.tag_name }}
${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:latest
ghcr.io/dgtlmoon/changedetection.io:latest
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v8,linux/arm64/v8
cache-from: type=gha
cache-to: type=gha,mode=max
# Looks like this was disabled
# provenance: false
- name: Image digest
run: echo step SHA ${{ steps.vars.outputs.sha_short }} tag ${{steps.vars.outputs.tag}} branch ${{steps.vars.outputs.branch}} digest ${{ steps.docker_build.outputs.digest }}
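Once this workflow has run, the resulting images can be consumed directly; a small sketch using the GHCR tags that appear verbatim in the build steps above (the :dev tag tracks master, while :latest and the version tag come from tagged releases):

    docker pull ghcr.io/dgtlmoon/changedetection.io:dev
    docker pull ghcr.io/dgtlmoon/changedetection.io:latest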

76
.github/workflows/pypi-release.yml vendored Normal file

@@ -0,0 +1,76 @@
name: Publish Python 🐍distribution 📦 to PyPI and TestPyPI
on: push
jobs:
build:
name: Build distribution 📦
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v5
with:
python-version: "3.11"
- name: Install pypa/build
run: >-
python3 -m
pip install
build
--user
- name: Build a binary wheel and a source tarball
run: python3 -m build
- name: Store the distribution packages
uses: actions/upload-artifact@v4
with:
name: python-package-distributions
path: dist/
test-pypi-package:
name: Test the built 📦 package works basically.
runs-on: ubuntu-latest
needs:
- build
steps:
- name: Download all the dists
uses: actions/download-artifact@v4
with:
name: python-package-distributions
path: dist/
- name: Set up Python 3.11
uses: actions/setup-python@v5
with:
python-version: '3.11'
- name: Test that the basic pip built package runs without error
run: |
set -ex
pip3 install dist/changedetection.io*.whl
changedetection.io -d /tmp -p 10000 &
sleep 3
curl --retry-connrefused --retry 6 http://127.0.0.1:10000/static/styles/pure-min.css >/dev/null
curl --retry-connrefused --retry 6 http://127.0.0.1:10000/ >/dev/null
killall changedetection.io
publish-to-pypi:
name: >-
Publish Python 🐍 distribution 📦 to PyPI
if: startsWith(github.ref, 'refs/tags/') # only publish to PyPI on tag pushes
needs:
- test-pypi-package
runs-on: ubuntu-latest
environment:
name: release
url: https://pypi.org/p/changedetection.io
permissions:
id-token: write # IMPORTANT: mandatory for trusted publishing
steps:
- name: Download all the dists
uses: actions/download-artifact@v4
with:
name: python-package-distributions
path: dist/
- name: Publish distribution 📦 to PyPI
uses: pypa/gh-action-pypi-publish@release/v1
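The smoke test from the test-pypi-package job can also be reproduced locally once a release is on PyPI; a rough sketch assuming a clean virtual environment, with the port and datastore path simply mirroring the workflow step above:

    pip3 install changedetection.io
    changedetection.io -d /tmp -p 10000 &
    sleep 3
    # Same basic reachability check as in CI
    curl --retry-connrefused --retry 6 http://127.0.0.1:10000/ >/dev/null && echo "basic check passed"
    killall changedetection.io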


@@ -0,0 +1,70 @@
name: ChangeDetection.io Container Build Test
# Triggers the workflow on push or pull request events
# This line doesn't work, even though it is the documented one
#on: [push, pull_request]
on:
push:
paths:
- requirements.txt
- Dockerfile
- .github/workflows/*
- .github/test/Dockerfile*
pull_request:
paths:
- requirements.txt
- Dockerfile
- .github/workflows/*
- .github/test/Dockerfile*
# Changes to requirements.txt packages and Dockerfile may or may not always be compatible with arm etc, so worth testing
# @todo: some kind of path filter for requirements.txt and Dockerfile
jobs:
test-container-build:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Set up Python 3.11
uses: actions/setup-python@v5
with:
python-version: 3.11
# Just test that the build works, some libraries won't compile on ARM/rPi etc
- name: Set up QEMU
uses: docker/setup-qemu-action@v3
with:
image: tonistiigi/binfmt:latest
platforms: all
- name: Set up Docker Buildx
id: buildx
uses: docker/setup-buildx-action@v3
with:
install: true
version: latest
driver-opts: image=moby/buildkit:master
# https://github.com/dgtlmoon/changedetection.io/pull/1067
# Check we can still build under alpine/musl
- name: Test that the docker containers can build (musl via alpine check)
id: docker_build_musl
uses: docker/build-push-action@v6
with:
context: ./
file: ./.github/test/Dockerfile-alpine
platforms: linux/amd64,linux/arm64
- name: Test that the docker containers can build
id: docker_build
uses: docker/build-push-action@v6
# https://github.com/docker/build-push-action#customizing
with:
context: ./
file: ./Dockerfile
platforms: linux/amd64,linux/arm64,linux/arm/v7,linux/arm/v8,linux/arm64/v8
cache-from: type=local,src=/tmp/.buildx-cache
cache-to: type=local,dest=/tmp/.buildx-cache
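The same multi-architecture builds can be attempted locally with Docker Buildx; a sketch using the platforms and Dockerfile paths from the steps above (the builder name is illustrative, and QEMU/binfmt support must be present, as the workflow sets up):

    docker buildx create --name cdio-builder --use
    docker buildx build --platform linux/amd64,linux/arm64 --file ./Dockerfile .
    # Alpine/musl build check
    docker buildx build --platform linux/amd64,linux/arm64 --file ./.github/test/Dockerfile-alpine .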

46
.github/workflows/test-only.yml vendored Normal file

@@ -0,0 +1,46 @@
name: ChangeDetection.io App Test
# Triggers the workflow on push or pull request events
on: [push, pull_request]
jobs:
lint-code:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- name: Lint with flake8
run: |
pip3 install flake8
# stop the build if there are Python syntax errors or undefined names
flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
# exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
test-application-3-10:
needs: lint-code
uses: ./.github/workflows/test-stack-reusable-workflow.yml
with:
python-version: '3.10'
test-application-3-11:
needs: lint-code
uses: ./.github/workflows/test-stack-reusable-workflow.yml
with:
python-version: '3.11'
skip-pypuppeteer: true
test-application-3-12:
needs: lint-code
uses: ./.github/workflows/test-stack-reusable-workflow.yml
with:
python-version: '3.12'
skip-pypuppeteer: true
test-application-3-13:
needs: lint-code
uses: ./.github/workflows/test-stack-reusable-workflow.yml
with:
python-version: '3.13'
skip-pypuppeteer: true
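The lint stage can be repeated locally before pushing; a sketch that reuses the flake8 invocations from the lint-code job above:

    pip3 install flake8
    # Stop on Python syntax errors or undefined names
    flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
    # Report everything else as warnings only
    flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics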


@@ -0,0 +1,241 @@
name: ChangeDetection.io App Test
on:
workflow_call:
inputs:
python-version:
description: 'Python version to use'
required: true
type: string
default: '3.10'
skip-pypuppeteer:
description: 'Skip PyPuppeteer (not supported in 3.11/3.12)'
required: false
type: boolean
default: false
jobs:
test-application:
runs-on: ubuntu-latest
env:
PYTHON_VERSION: ${{ inputs.python-version }}
steps:
- uses: actions/checkout@v4
# Mainly just for lint/flake8
- name: Set up Python ${{ env.PYTHON_VERSION }}
uses: actions/setup-python@v5
with:
python-version: ${{ env.PYTHON_VERSION }}
- name: Build changedetection.io container for testing under Python ${{ env.PYTHON_VERSION }}
run: |
echo "---- Building for Python ${{ env.PYTHON_VERSION }} -----"
# Build a changedetection.io container and start testing inside
docker build --build-arg PYTHON_VERSION=${{ env.PYTHON_VERSION }} --build-arg LOGGER_LEVEL=TRACE -t test-changedetectionio .
# Debug info
docker run test-changedetectionio bash -c 'pip list'
- name: We should be Python ${{ env.PYTHON_VERSION }} ...
run: |
docker run test-changedetectionio bash -c 'python3 --version'
- name: Spin up ancillary testable services
run: |
docker network create changedet-network
# Selenium
docker run --network changedet-network -d --hostname selenium -p 4444:4444 --rm --shm-size="2g" selenium/standalone-chrome:4
# SocketPuppetBrowser + Extra for custom browser test
docker run --network changedet-network -d -e "LOG_LEVEL=TRACE" --cap-add=SYS_ADMIN --name sockpuppetbrowser --hostname sockpuppetbrowser --rm -p 3000:3000 dgtlmoon/sockpuppetbrowser:latest
docker run --network changedet-network -d -e "LOG_LEVEL=TRACE" --cap-add=SYS_ADMIN --name sockpuppetbrowser-custom-url --hostname sockpuppetbrowser-custom-url -p 3001:3000 --rm dgtlmoon/sockpuppetbrowser:latest
- name: Spin up ancillary SMTP+Echo message test server
run: |
# Debug SMTP server/echo message back server
docker run --network changedet-network -d -p 11025:11025 -p 11080:11080 --hostname mailserver test-changedetectionio bash -c 'pip3 install aiosmtpd && python changedetectionio/tests/smtp/smtp-test-server.py'
docker ps
- name: Show docker container state and other debug info
run: |
set -x
echo "Running processes in docker..."
docker ps
- name: Run Unit Tests
run: |
# Unit tests
docker run test-changedetectionio bash -c 'python3 -m unittest changedetectionio.tests.unit.test_notification_diff'
docker run test-changedetectionio bash -c 'python3 -m unittest changedetectionio.tests.unit.test_watch_model'
docker run test-changedetectionio bash -c 'python3 -m unittest changedetectionio.tests.unit.test_jinja2_security'
docker run test-changedetectionio bash -c 'python3 -m unittest changedetectionio.tests.unit.test_semver'
- name: Test built container with Pytest (generally as requests/plaintext fetching)
run: |
# All tests
echo "run test with pytest"
# The default pytest logger_level is TRACE
# To change logger_level for pytest(test/conftest.py),
# append the docker option. e.g. '-e LOGGER_LEVEL=DEBUG'
docker run --name test-cdio-basic-tests --network changedet-network test-changedetectionio bash -c 'cd changedetectionio && ./run_basic_tests.sh'
# PLAYWRIGHT/NODE-> CDP
- name: Playwright and SocketPuppetBrowser - Specific tests in built container
run: |
# Playwright via Sockpuppetbrowser fetch
# tests/visualselector/test_fetch_data.py will do browser steps
docker run --rm -e "FLASK_SERVER_NAME=cdio" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network --hostname=cdio test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-host=0.0.0.0 --live-server-port=5004 tests/fetchers/test_content.py'
docker run --rm -e "FLASK_SERVER_NAME=cdio" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network --hostname=cdio test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-host=0.0.0.0 --live-server-port=5004 tests/test_errorhandling.py'
docker run --rm -e "FLASK_SERVER_NAME=cdio" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network --hostname=cdio test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-host=0.0.0.0 --live-server-port=5004 tests/visualselector/test_fetch_data.py'
docker run --rm -e "FLASK_SERVER_NAME=cdio" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network --hostname=cdio test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-host=0.0.0.0 --live-server-port=5004 tests/fetchers/test_custom_js_before_content.py'
- name: Playwright and SocketPuppetBrowser - Headers and requests
run: |
# Settings headers playwright tests - Call back in from Sockpuppetbrowser, check headers
docker run --name "changedet" --hostname changedet --rm -e "FLASK_SERVER_NAME=changedet" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000?dumpio=true" --network changedet-network test-changedetectionio bash -c 'find .; cd changedetectionio; pytest --live-server-host=0.0.0.0 --live-server-port=5004 tests/test_request.py; pwd;find .'
- name: Playwright and SocketPuppetBrowser - Restock detection
run: |
# restock detection via playwright - added name=changedet here so that playwright and sockpuppetbrowser can connect to it
docker run --rm --name "changedet" -e "FLASK_SERVER_NAME=changedet" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-port=5004 --live-server-host=0.0.0.0 tests/restock/test_restock.py'
# STRAIGHT TO CDP
- name: Pyppeteer and SocketPuppetBrowser - Specific tests in built container
if: ${{ inputs.skip-pypuppeteer == false }}
run: |
# Playwright via Sockpuppetbrowser fetch
docker run --rm -e "FLASK_SERVER_NAME=cdio" -e "FAST_PUPPETEER_CHROME_FETCHER=True" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network --hostname=cdio test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-host=0.0.0.0 --live-server-port=5004 --live-server-wait=20 tests/fetchers/test_content.py'
docker run --rm -e "FLASK_SERVER_NAME=cdio" -e "FAST_PUPPETEER_CHROME_FETCHER=True" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network --hostname=cdio test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-host=0.0.0.0 --live-server-port=5004 --live-server-wait=20 tests/test_errorhandling.py'
docker run --rm -e "FLASK_SERVER_NAME=cdio" -e "FAST_PUPPETEER_CHROME_FETCHER=True" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network --hostname=cdio test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-host=0.0.0.0 --live-server-port=5004 --live-server-wait=20 tests/visualselector/test_fetch_data.py'
docker run --rm -e "FLASK_SERVER_NAME=cdio" -e "FAST_PUPPETEER_CHROME_FETCHER=True" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network --hostname=cdio test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-host=0.0.0.0 --live-server-port=5004 --live-server-wait=20 tests/fetchers/test_custom_js_before_content.py'
- name: Pyppeteer and SocketPuppetBrowser - Headers and requests checks
if: ${{ inputs.skip-pypuppeteer == false }}
run: |
# Settings headers playwright tests - Call back in from Sockpuppetbrowser, check headers
docker run --name "changedet" --hostname changedet --rm -e "FAST_PUPPETEER_CHROME_FETCHER=True" -e "FLASK_SERVER_NAME=changedet" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000?dumpio=true" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio; pytest --live-server-host=0.0.0.0 --live-server-port=5004 --live-server-wait=20 tests/test_request.py'
- name: Pyppeteer and SocketPuppetBrowser - Restock detection
if: ${{ inputs.skip-pypuppeteer == false }}
run: |
# restock detection via playwright - added name=changedet here so that playwright and sockpuppetbrowser can connect to it
docker run --rm --name "changedet" -e "FLASK_SERVER_NAME=changedet" -e "FAST_PUPPETEER_CHROME_FETCHER=True" -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest --live-server-port=5004 --live-server-host=0.0.0.0 --live-server-wait=20 tests/restock/test_restock.py'
# SELENIUM
- name: Specific tests in built container for Selenium
run: |
# Selenium fetch
docker run --rm -e "WEBDRIVER_URL=http://selenium:4444/wd/hub" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest tests/fetchers/test_content.py && pytest tests/test_errorhandling.py'
- name: Specific tests in built container for headers and requests checks with Selenium
run: |
docker run --name "changedet" --hostname changedet --rm -e "FLASK_SERVER_NAME=changedet" -e "WEBDRIVER_URL=http://selenium:4444/wd/hub" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio; pytest --live-server-host=0.0.0.0 --live-server-port=5004 --live-server-wait=20 tests/test_request.py'
# OTHER STUFF
- name: Test SMTP notification mime types
run: |
# SMTP content types - needs the 'Debug SMTP server/echo message back server' container from above
# "mailserver" hostname defined above
docker run --rm --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest tests/smtp/test_notification_smtp.py'
# @todo Add a test via playwright/puppeteer
# squid with auth is tested in run_proxy_tests.sh -> tests/proxy_list/test_select_custom_proxy.py
- name: Test proxy squid style interaction
run: |
cd changedetectionio
./run_proxy_tests.sh
cd ..
- name: Test proxy SOCKS5 style interaction
run: |
cd changedetectionio
./run_socks_proxy_tests.sh
cd ..
- name: Test custom browser URL
run: |
cd changedetectionio
./run_custom_browser_url_tests.sh
cd ..
- name: Test changedetection.io container starts+runs basically without error
run: |
docker run --name test-changedetectionio -p 5556:5000 -d test-changedetectionio
sleep 3
# Should return 0 (no error) when grep finds it
curl --retry-connrefused --retry 6 -s http://localhost:5556 |grep -q checkbox-uuid
# and IPv6
curl --retry-connrefused --retry 6 -s -g -6 "http://[::1]:5556"|grep -q checkbox-uuid
# Check whether TRACE log is enabled.
# Also, check whether TRACE came from STDERR
docker logs test-changedetectionio 2>&1 1>/dev/null | grep 'TRACE log is enabled' || exit 1
# Check whether DEBUG came from STDOUT
docker logs test-changedetectionio 2>/dev/null | grep 'DEBUG' || exit 1
docker kill test-changedetectionio
- name: Test changedetection.io SIGTERM and SIGINT signal shutdown
run: |
echo SIGINT Shutdown request test
docker run --name sig-test -d test-changedetectionio
sleep 3
echo ">>> Sending SIGINT to sig-test container"
docker kill --signal=SIGINT sig-test
sleep 3
# invert the check (it should be not 0/not running)
docker ps
# check signal catch(STDERR) log. Because of
# changedetectionio/__init__.py: logger.add(sys.stderr, level=logger_level)
docker logs sig-test 2>&1 | grep 'Shutdown: Got Signal - SIGINT' || exit 1
test -z "`docker ps|grep sig-test`"
if [ $? -ne 0 ]
then
echo "Looks like container was running when it shouldnt be"
docker ps
exit 1
fi
# @todo - scan the container log to see the right "graceful shutdown" text exists
docker rm sig-test
echo SIGTERM Shutdown request test
docker run --name sig-test -d test-changedetectionio
sleep 3
echo ">>> Sending SIGTERM to sig-test container"
docker kill --signal=SIGTERM sig-test
sleep 3
# invert the check (it should be not 0/not running)
docker ps
# check signal catch(STDERR) log. Because of
# changedetectionio/__init__.py: logger.add(sys.stderr, level=logger_level)
docker logs sig-test 2>&1 | grep 'Shutdown: Got Signal - SIGTERM' || exit 1
test -z "`docker ps|grep sig-test`"
if [ $? -ne 0 ]
then
echo "Looks like container was running when it shouldnt be"
docker ps
exit 1
fi
# @todo - scan the container log to see the right "graceful shutdown" text exists
docker rm sig-test
- name: Dump container log
if: always()
run: |
mkdir output-logs
docker logs test-cdio-basic-tests > output-logs/test-cdio-basic-tests-stdout-${{ env.PYTHON_VERSION }}.txt
docker logs test-cdio-basic-tests 2> output-logs/test-cdio-basic-tests-stderr-${{ env.PYTHON_VERSION }}.txt
- name: Store everything including test-datastore
if: always()
uses: actions/upload-artifact@v4
with:
name: test-cdio-basic-tests-output-py${{ env.PYTHON_VERSION }}
path: .
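Outside CI, the basic test stage of this workflow can be approximated as follows; a sketch that mirrors the build, network and pytest steps above (the image and network names are copied from the workflow):

    docker build --build-arg LOGGER_LEVEL=TRACE -t test-changedetectionio .
    docker network create changedet-network
    docker run --name test-cdio-basic-tests --network changedet-network test-changedetectionio bash -c 'cd changedetectionio && ./run_basic_tests.sh'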

32
.gitignore vendored

@@ -1,5 +1,29 @@
__pycache__
# Byte-compiled / optimized / DLL files
**/__pycache__
**/*.py[cod]
# Caches
.mypy_cache/
.pytest_cache/
.ruff_cache/
# Distribution / packaging
build/
dist/
*.egg-info*
# Virtual environment
.env
.venv/
venv/
# IDEs
.idea
*.pyc
datastore/url-watches.json
datastore/*
.vscode/settings.json
# Datastore files
datastore/
test-datastore/
# Memory consumption log
test-memory.log

54
COMMERCIAL_LICENCE.md Normal file

@@ -0,0 +1,54 @@
# Generally
In any commercial activity involving 'Hosting' (as defined herein), whether in part or in full, this license must be executed and adhered to.
# Commercial License Agreement
This Commercial License Agreement ("Agreement") is entered into by and between Web Technologies s.r.o. here-in ("Licensor") and (your company or personal name) _____________ ("Licensee"). This Agreement sets forth the terms and conditions under which Licensor provides its software ("Software") and services to Licensee for the purpose of reselling the software either in part or full, as part of any commercial activity where the activity involves a third party.
### Definition of Hosting
For the purposes of this Agreement, "hosting" means making the functionality of the Program or modified version available to third parties as a service. This includes, without limitation:
- Enabling third parties to interact with the functionality of the Program or modified version remotely through a computer network.
- Offering a service the value of which entirely or primarily derives from the value of the Program or modified version.
- Offering a service that accomplishes for users the primary purpose of the Program or modified version.
## 1. Grant of License
Subject to the terms and conditions of this Agreement, Licensor grants Licensee a non-exclusive, non-transferable license to install, use, and resell the Software. Licensee may:
- Resell the Software as part of a service offering or as a standalone product.
- Host the Software on a server and provide it as a hosted service (e.g., Software as a Service - SaaS).
- Integrate the Software into a larger product or service that is then sold or provided for commercial purposes, where the software is used either in part or full.
## 2. License Fees
Licensee agrees to pay Licensor the license fees specified in the ordering document. License fees are due and payable as specified in the ordering document. The fees may include initial licensing costs and recurring fees based on the number of end users, instances of the Software resold, or revenue generated from the resale activities.
## 3. Resale Conditions
Licensee must comply with the following conditions when reselling the Software, whether the software is resold in part or full:
- Provide end users with access to the source code under the same open-source license conditions as provided by Licensor.
- Clearly state in all marketing and sales materials that the Software is provided under a commercial license from Licensor, and provide a link back to https://changedetection.io.
- Ensure end users are aware of and agree to the terms of the commercial license prior to resale.
- Do not sublicense or transfer the Software to third parties except as part of an authorized resale activity.
## 4. Hosting and Provision of Services
Licensee may host the Software (either in part or full) on its servers and provide it as a hosted service to end users. The following conditions apply:
- Licensee must ensure that all hosted versions of the Software comply with the terms of this Agreement.
- Licensee must provide Licensor with regular reports detailing the number of end users and instances of the hosted service.
- Any modifications to the Software made by Licensee for hosting purposes must be made available to end users under the same open-source license conditions, unless agreed otherwise.
## 5. Services
Licensor will provide support and maintenance services as described in the support policy referenced in the ordering document should such an agreement be signed by all parties. Additional fees may apply for support services provided to end users resold by Licensee.
## 6. Reporting and Audits
Licensee agrees to provide Licensor with regular reports detailing the number of instances, end users, and revenue generated from the resale of the Software. Licensor reserves the right to audit Licensee's records to ensure compliance with this Agreement.
## 7. Term and Termination
This Agreement shall commence on the effective date and continue for the period set forth in the ordering document unless terminated earlier in accordance with this Agreement. Either party may terminate this Agreement if the other party breaches any material term and fails to cure such breach within thirty (30) days after receipt of written notice.
## 8. Limitation of Liability and Disclaimer of Warranty
Executing this commercial license does not waive the Limitation of Liability or Disclaimer of Warranty as stated in the open-source LICENSE provided with the Software. The Software is provided "as is," without warranty of any kind, express or implied, including but not limited to the warranties of merchantability, fitness for a particular purpose, and noninfringement. In no event shall the authors or copyright holders be liable for any claim, damages, or other liability, whether in an action of contract, tort, or otherwise, arising from, out of, or in connection with the Software or the use or other dealings in the Software.
## 9. Governing Law
This Agreement shall be governed by and construed in accordance with the laws of the Czech Republic.
## Contact Information
For commercial licensing inquiries, please contact contact@changedetection.io and dgtlmoon@gmail.com.

9
CONTRIBUTING.md Normal file

@@ -0,0 +1,9 @@
Contributing is always welcome!
I am not a professional Flask developer; if you know a better way that something can be done, please let me know!
Otherwise, it's always best to PR into the `master` branch.
Please be sure that all new functionality has a matching test!
Use `pytest` to validate/test; you can run the existing tests with `pytest tests/test_notification.py`, for example (see the sketch below).
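A minimal local test run might look like the following; a sketch assuming a checkout of the repository and a recent Python 3, following the paths used in CONTRIBUTING.md and the CI workflows (the virtualenv name is illustrative):

    python3 -m venv .venv
    source .venv/bin/activate
    pip install -r requirements.txt
    pip install pytest   # installed separately in CI as well
    cd changedetectionio
    pytest tests/test_notification.py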


@@ -1,24 +1,79 @@
FROM python:3.8-slim
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt
# pip dependencies install stage
# @NOTE! I would love to move to 3.11 but it breaks the async handler in changedetectionio/content_fetchers/puppeteer.py
# If you know how to fix it, please do! and test it for both 3.10 and 3.11
ARG PYTHON_VERSION=3.11
FROM python:${PYTHON_VERSION}-slim-bookworm AS builder
# See `cryptography` pin comment in requirements.txt
ARG CRYPTOGRAPHY_DONT_BUILD_RUST=1
RUN apt-get update && apt-get install -y --no-install-recommends \
g++ \
gcc \
libc-dev \
libffi-dev \
libjpeg-dev \
libssl-dev \
libxslt-dev \
make \
zlib1g-dev
RUN mkdir /install
WORKDIR /install
COPY requirements.txt /requirements.txt
# --extra-index-url https://www.piwheels.org/simple is for cryptography module to be prebuilt (or rustc etc needs to be installed)
RUN pip install --extra-index-url https://www.piwheels.org/simple --target=/dependencies -r /requirements.txt
# Playwright is an alternative to Selenium
# Excluded this package from requirements.txt to prevent arm/v6 and arm/v7 builds from failing
# https://github.com/dgtlmoon/changedetection.io/pull/1067 also musl/alpine (not supported)
RUN pip install --target=/dependencies playwright~=1.48.0 \
|| echo "WARN: Failed to install Playwright. The application can still run, but the Playwright option will be disabled."
# Final image stage
FROM python:${PYTHON_VERSION}-slim-bookworm
LABEL org.opencontainers.image.source="https://github.com/dgtlmoon/changedetection.io"
RUN apt-get update && apt-get install -y --no-install-recommends \
libxslt1.1 \
# For presenting price amounts correctly in the restock/price detection overview
locales \
# For pdftohtml
poppler-utils \
zlib1g \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
COPY backend /app
WORKDIR /app
# https://stackoverflow.com/questions/58701233/docker-logs-erroneously-appears-empty-until-container-stops
ENV PYTHONUNBUFFERED=1
# Attempt to store the triggered commit
ARG SOURCE_COMMIT
ARG SOURCE_BRANCH
RUN echo "commit: $SOURCE_COMMIT branch: $SOURCE_BRANCH" >/source.txt
RUN [ ! -d "/datastore" ] && mkdir /datastore
CMD [ "python", "./backend.py" ]
# Re #80, sets SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites
RUN sed -i 's/^CipherString = .*/CipherString = DEFAULT@SECLEVEL=1/' /etc/ssl/openssl.cnf
# Copy modules over to the final image and add their dir to PYTHONPATH
COPY --from=builder /dependencies /usr/local
ENV PYTHONPATH=/usr/local
EXPOSE 5000
# The actual flask app module
COPY changedetectionio /app/changedetectionio
# Starting wrapper
COPY changedetection.py /app/changedetection.py
# GitHub Actions test purpose (test-only.yml).
# On production, it is effectively LOGGER_LEVEL=''.
ARG LOGGER_LEVEL=''
ENV LOGGER_LEVEL "$LOGGER_LEVEL"
WORKDIR /app
CMD ["python", "./changedetection.py", "-d", "/datastore"]

201
LICENSE Normal file

@@ -0,0 +1,201 @@
Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
and distribution as defined by Sections 1 through 9 of this document.
"Licensor" shall mean the copyright owner or entity authorized by
the copyright owner that is granting the License.
"Legal Entity" shall mean the union of the acting entity and all
other entities that control, are controlled by, or are under common
control with that entity. For the purposes of this definition,
"control" means (i) the power, direct or indirect, to cause the
direction or management of such entity, whether by contract or
otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.
"You" (or "Your") shall mean an individual or Legal Entity
exercising permissions granted by this License.
"Source" form shall mean the preferred form for making modifications,
including but not limited to software source code, documentation
source, and configuration files.
"Object" form shall mean any form resulting from mechanical
transformation or translation of a Source form, including but
not limited to compiled object code, generated documentation,
and conversions to other media types.
"Work" shall mean the work of authorship, whether in Source or
Object form, made available under the License, as indicated by a
copyright notice that is included in or attached to the work
(an example is provided in the Appendix below).
"Derivative Works" shall mean any work, whether in Source or Object
form, that is based on (or derived from) the Work and for which the
editorial revisions, annotations, elaborations, or other modifications
represent, as a whole, an original work of authorship. For the purposes
of this License, Derivative Works shall not include works that remain
separable from, or merely link (or bind by name) to the interfaces of,
the Work and Derivative Works thereof.
"Contribution" shall mean any work of authorship, including
the original version of the Work and any modifications or additions
to that Work or Derivative Works thereof, that is intentionally
submitted to Licensor for inclusion in the Work by the copyright owner
or by an individual or Legal Entity authorized to submit on behalf of
the copyright owner. For the purposes of this definition, "submitted"
means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems,
and issue tracking systems that are managed by, or on behalf of, the
Licensor for the purpose of discussing and improving the Work, but
excluding communication that is conspicuously marked or otherwise
designated in writing by the copyright owner as "Not a Contribution."
"Contributor" shall mean Licensor and any individual or Legal Entity
on behalf of whom a Contribution has been received by Licensor and
subsequently incorporated within the Work.
2. Grant of Copyright License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
copyright license to reproduce, prepare Derivative Works of,
publicly display, publicly perform, sublicense, and distribute the
Work and such Derivative Works in Source or Object form.
3. Grant of Patent License. Subject to the terms and conditions of
this License, each Contributor hereby grants to You a perpetual,
worldwide, non-exclusive, no-charge, royalty-free, irrevocable
(except as stated in this section) patent license to make, have made,
use, offer to sell, sell, import, and otherwise transfer the Work,
where such license applies only to those patent claims licensable
by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s)
with the Work to which such Contribution(s) was submitted. If You
institute patent litigation against any entity (including a
cross-claim or counterclaim in a lawsuit) alleging that the Work
or a Contribution incorporated within the Work constitutes direct
or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate
as of the date such litigation is filed.
4. Redistribution. You may reproduce and distribute copies of the
Work or Derivative Works thereof in any medium, with or without
modifications, and in Source or Object form, provided that You
meet the following conditions:
(a) You must give any other recipients of the Work or
Derivative Works a copy of this License; and
(b) You must cause any modified files to carry prominent notices
stating that You changed the files; and
(c) You must retain, in the Source form of any Derivative Works
that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work,
excluding those notices that do not pertain to any part of
the Derivative Works; and
(d) If the Work includes a "NOTICE" text file as part of its
distribution, then any Derivative Works that You distribute must
include a readable copy of the attribution notices contained
within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one
of the following places: within a NOTICE text file distributed
as part of the Derivative Works; within the Source form or
documentation, if provided along with the Derivative Works; or,
within a display generated by the Derivative Works, if and
wherever such third-party notices normally appear. The contents
of the NOTICE file are for informational purposes only and
do not modify the License. You may add Your own attribution
notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided
that such additional attribution notices cannot be construed
as modifying the License.
You may add Your own copyright statement to Your modifications and
may provide additional or different license terms and conditions
for use, reproduction, or distribution of Your modifications, or
for any such Derivative Works as a whole, provided Your use,
reproduction, and distribution of the Work otherwise complies with
the conditions stated in this License.
5. Submission of Contributions. Unless You explicitly state otherwise,
any Contribution intentionally submitted for inclusion in the Work
by You to the Licensor shall be under the terms and conditions of
this License, without any additional terms or conditions.
Notwithstanding the above, nothing herein shall supersede or modify
the terms of any separate license agreement you may have executed
with Licensor regarding such Contributions.
6. Trademarks. This License does not grant permission to use the trade
names, trademarks, service marks, or product names of the Licensor,
except as required for reasonable and customary use in describing the
origin of the Work and reproducing the content of the NOTICE file.
7. Disclaimer of Warranty. Unless required by applicable law or
agreed to in writing, Licensor provides the Work (and each
Contributor provides its Contributions) on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
implied, including, without limitation, any warranties or conditions
of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any
risks associated with Your exercise of permissions under this License.
8. Limitation of Liability. In no event and under no legal theory,
whether in tort (including negligence), contract, or otherwise,
unless required by applicable law (such as deliberate and grossly
negligent acts) or agreed to in writing, shall any Contributor be
liable to You for damages, including any direct, indirect, special,
incidental, or consequential damages of any character arising as a
result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill,
work stoppage, computer failure or malfunction, or any and all
other commercial damages or losses), even if such Contributor
has been advised of the possibility of such damages.
9. Accepting Warranty or Additional Liability. While redistributing
the Work or Derivative Works thereof, You may choose to offer,
and charge a fee for, acceptance of support, warranty, indemnity,
or other liability obligations and/or rights consistent with this
License. However, in accepting such obligations, You may act only
on Your own behalf and on Your sole responsibility, not on behalf
of any other Contributor, and only if You agree to indemnify,
defend, and hold each Contributor harmless for any liability
incurred by, or claims asserted against, such Contributor by reason
of your accepting any such warranty or additional liability.
END OF TERMS AND CONDITIONS
APPENDIX: How to apply the Apache License to your work.
To apply the Apache License to your work, attach the following
boilerplate notice, with the fields enclosed by brackets "[]"
replaced with your own identifying information. (Don't include
the brackets!) The text should be enclosed in the appropriate
comment syntax for the file format. We also recommend that a
file or class name and description of purpose be included on the
same "printed page" as the copyright notice for easier
identification within third-party archives.
Copyright [yyyy] [name of copyright owner]
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

22
MANIFEST.in Normal file
View File

@@ -0,0 +1,22 @@
recursive-include changedetectionio/api *
recursive-include changedetectionio/apprise_plugin *
recursive-include changedetectionio/blueprint *
recursive-include changedetectionio/content_fetchers *
recursive-include changedetectionio/model *
recursive-include changedetectionio/processors *
recursive-include changedetectionio/static *
recursive-include changedetectionio/templates *
recursive-include changedetectionio/tests *
prune changedetectionio/static/package-lock.json
prune changedetectionio/static/styles/node_modules
prune changedetectionio/static/styles/package-lock.json
include changedetection.py
include requirements.txt
include README-pip.md
global-exclude *.pyc
global-exclude node_modules
global-exclude venv
global-exclude test-datastore
global-exclude changedetection.io*dist-info
global-exclude changedetectionio/tests/proxy_socks5/test-datastore

99
README-pip.md Normal file
View File

@@ -0,0 +1,99 @@
## Web Site Change Detection, Monitoring and Notification.
Live your data-life pro-actively, track website content changes and receive notifications via Discord, Email, Slack, Telegram and 70+ more
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring, list of websites with changes" title="Self-hosted web page change monitoring, list of websites with changes" />](https://changedetection.io)
[**Don't have time? Let us host it for you! Try our extremely affordable subscription, use our proxies and support!**](https://changedetection.io)
### Target specific parts of the webpage using the Visual Selector tool.
Available when connected to a <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Playwright-content-fetcher">playwright content fetcher</a> (included as part of our subscription service)
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/visualselector-anim.gif" style="max-width:100%;" alt="Select parts and elements of a web page to monitor for changes" title="Select parts and elements of a web page to monitor for changes" />](https://changedetection.io?src=pip)
### Easily see what changed, examine by word, line, or individual character.
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />](https://changedetection.io?src=pip)
### Perform interactive browser steps
Fill in text boxes, click buttons and more to set up your changedetection scenario.
Using the **Browser Steps** configuration, add basic steps before performing change detection, such as logging into websites, adding a product to a cart, accepting cookie prompts, entering dates and refining searches.
[<img src="docs/browsersteps-anim.gif" style="max-width:100%;" alt="Website change detection with interactive browser steps, detect changes behind login and password, search queries and more" title="Website change detection with interactive browser steps, detect changes behind login and password, search queries and more" />](https://changedetection.io?src=pip)
After **Browser Steps** have been run, visit the **Visual Selector** tab to refine the content you're interested in.
Requires Playwright to be enabled.
### Example use cases
- Products and services have a change in pricing
- _Out of stock notification_ and _Back In stock notification_
- Monitor and track PDF file changes, know when a PDF file has text changes.
- Governmental department updates (changes are often only on their websites)
- New software releases, security advisories when you're not on their mailing list.
- Festivals with changes
- Discogs restock alerts and monitoring
- Real estate listing changes
- Know when your favourite whiskey is on sale, or other special deals are announced before anyone else
- COVID related news from government websites
- University/organisation news from their website
- Detect and monitor changes in JSON API responses
- JSON API monitoring and alerting
- Changes in legal and other documents
- Trigger API calls via notifications when text appears on a website
- Glue together APIs using the JSON filter and JSON notifications
- Create RSS feeds based on changes in web content
- Monitor HTML source code for unexpected changes, strengthen your PCI compliance
- You have a very sensitive list of URLs to watch and you do _not_ want to use the paid alternatives. (Remember, _you_ are the product)
- Get notified when certain keywords appear in Twitter search results
- Proactively search for jobs, get notified when companies update their careers page, search job portals for keywords.
- Get alerts when new job positions are open on Bamboo HR and other job platforms
- Website defacement monitoring
- Pokémon Card Restock Tracker / Pokémon TCG Tracker
- RegTech - stay ahead of regulatory changes, regulatory compliance
_Need an actual Chrome runner with JavaScript support? We support fetching via WebDriver and Playwright!_
#### Key Features
- Lots of trigger filters, such as "Trigger on text", "Remove text by selector", "Ignore text", "Extract text", also using regular expressions!
- Target elements with XPath(1.0) and CSS Selectors; easily monitor complex JSON with JSONPath or jq
- Switch between fast non-JS and Chrome JS based "fetchers"
- Track changes in PDF files (Monitor text changed in the PDF, Also monitor PDF filesize and checksums)
- Easily specify how often a site should be checked
- Execute JS before extracting text (Good for logging in, see examples in the UI!)
- Override Request Headers, Specify `POST` or `GET` and other methods
- Use the "Visual Selector" to help target specific elements
- Configurable [proxy per watch](https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration)
- Send a screenshot with the notification when a change is detected in the web page
We [recommend and use Bright Data](https://brightdata.grsm.io/n0r16zf7eivq) global proxy services, Bright Data will match any first deposit up to $100 using our signup link.
[Oxylabs](https://oxylabs.go2cloud.org/SH2d) is also an excellent proxy provider and well worth using; they offer Residential, ISP, Rotating and many other proxy types to suit your project.
Please :star: star :star: this project and help it grow! https://github.com/dgtlmoon/changedetection.io/
```bash
$ pip3 install changedetection.io
```
Specify a target for the *datastore path* with `-d` (required) and a *listening port* with `-p` (defaults to `5000`)
```bash
$ changedetection.io -d /path/to/empty/data/dir -p 5000
```
Then visit http://127.0.0.1:5000 and you should be able to access the UI.
See https://changedetection.io for more information.

312
README.md
View File

@@ -1,58 +1,310 @@
# changedetection.io
## Web Site Change Detection, Restock monitoring and notifications.
## Self-hosted change monitoring of web pages.
**_Detect website content changes and perform meaningful actions - trigger notifications via Discord, Email, Slack, Telegram, API calls and many more._**
_Know when web pages change! Stay on top of new information!_
_Live your data-life pro-actively._
#### Example use cases
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web site page change monitoring" title="Self-hosted web site page change monitoring" />](https://changedetection.io?src=github)
Know when ...
[![Release Version][release-shield]][release-link] [![Docker Pulls][docker-pulls]][docker-link] [![License][license-shield]](LICENSE.md)
- Government department updates (changes are often only on their websites)
- Local government news (changes are often only on their websites)
- New software releases
![changedetection.io](https://github.com/dgtlmoon/changedetection.io/actions/workflows/test-only.yml/badge.svg?branch=master)
[**Get started with website page change monitoring straight away. Don't have time? Try our $8.99/month subscription, use our proxies and support!**](https://changedetection.io) , _half the price of other website change monitoring services!_
- Chrome browser included.
- Nothing to install, access via browser login after signup.
- Super fast setup, no registration needed.
- Get started watching and receiving website change notifications straight away.
- See our [tutorials and how-to page for more inspiration](https://changedetection.io/tutorials)
### Target specific parts of the webpage using the Visual Selector tool.
Available when connected to a <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Playwright-content-fetcher">playwright content fetcher</a> (included as part of our subscription service)
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/visualselector-anim.gif" style="max-width:100%;" alt="Select parts and elements of a web page to monitor for changes" title="Select parts and elements of a web page to monitor for changes" />](https://changedetection.io?src=github)
### Easily see what changed, examine by word, line, or individual character.
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />](https://changedetection.io?src=github)
### Perform interactive browser steps
Fill in text boxes, click buttons and more to set up your changedetection scenario.
Using the **Browser Steps** configuration, add basic steps before performing change detection, such as logging into websites, adding a product to a cart, accepting cookie prompts, entering dates and refining searches.
[<img src="docs/browsersteps-anim.gif" style="max-width:100%;" alt="Website change detection with interactive browser steps, detect changes behind login and password, search queries and more" title="Website change detection with interactive browser steps, detect changes behind login and password, search queries and more" />](https://changedetection.io?src=github)
After **Browser Steps** have been run, visit the **Visual Selector** tab to refine the content you're interested in.
Requires Playwright to be enabled.
### Awesome restock and price change notifications
Enable the _"Re-stock & Price detection for single product pages"_ option to activate the best way to monitor product pricing, this will extract any meta-data in the HTML page and give you many options to follow the pricing of the product.
Easily organise and monitor prices for products from the dashboard, get alerts and notifications when the price of a product changes or comes back in stock again!
[<img src="docs/restock-overview.png" style="max-width:100%;" alt="Easily keep an eye on product price changes directly from the UI" title="Easily keep an eye on product price changes directly from the UI" />](https://changedetection.io?src=github)
Set price change notification parameters, upper and lower price, price change percentage and more.
Always know when a product for sale drops in price.
[<img src="docs/restock-settings.png" style="max-width:100%;" alt="Set upper lower and percentage price change notification values" title="Set upper lower and percentage price change notification values" />](https://changedetection.io?src=github)
### Example use cases
- Products and services have a change in pricing
- _Out of stock notification_ and _Back In stock notification_
- Monitor and track PDF file changes, know when a PDF file has text changes.
- Governmental department updates (changes are often only on their websites)
- New software releases, security advisories when you're not on their mailing list.
- Festivals with changes
- Discogs restock alerts and monitoring
- Real estate listing changes
- Know when your favourite whiskey is on sale, or other special deals are announced before anyone else
- COVID related news from government websites
- University/organisation news from their website
- Detect and monitor changes in JSON API responses
- JSON API monitoring and alerting
- Changes in legal and other documents
- Trigger API calls via notifications when text appears on a website
- Glue together APIs using the JSON filter and JSON notifications
- Create RSS feeds based on changes in web content
- Monitor HTML source code for unexpected changes, strengthen your PCI compliance
- You have a very sensitive list of URLs to watch and you do _not_ want to use the paid alternatives. (Remember, _you_ are the product)
- Get notified when certain keywords appear in Twitter search results
- Proactively search for jobs, get notified when companies update their careers page, search job portals for keywords.
- Get alerts when new job positions are open on Bamboo HR and other job platforms
- Website defacement monitoring
- Pokémon Card Restock Tracker / Pokémon TCG Tracker
- RegTech - stay ahead of regulatory changes, regulatory compliance
_Need an actual Chrome runner with JavaScript support? We support fetching via WebDriver and Playwright!_
**Get monitoring now! super simple, one command!**
#### Key Features
- Lots of trigger filters, such as "Trigger on text", "Remove text by selector", "Ignore text", "Extract text", also using regular expressions!
- Target elements with XPath(1.0) and CSS Selectors; easily monitor complex JSON with JSONPath or jq
- Switch between fast non-JS and Chrome JS based "fetchers"
- Track changes in PDF files (Monitor text changed in the PDF, Also monitor PDF filesize and checksums)
- Easily specify how often a site should be checked
- Execute JS before extracting text (Good for logging in, see examples in the UI!)
- Override Request Headers, Specify `POST` or `GET` and other methods
- Use the "Visual Selector" to help target specific elements
- Configurable [proxy per watch](https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration)
- Send a screenshot with the notification when a change is detected in the web page
We [recommend and use Bright Data](https://brightdata.grsm.io/n0r16zf7eivq) global proxy services, Bright Data will match any first deposit up to $100 using our signup link.
[Oxylabs](https://oxylabs.go2cloud.org/SH2d) is also an excellent proxy provider and well worth using; they offer Residential, ISP, Rotating and many other proxy types to suit your project.
Please :star: star :star: this project and help it grow! https://github.com/dgtlmoon/changedetection.io/
### Schedule web page watches in any timezone, limit by day of week and time.
Easily set a re-check schedule; for example, you could limit the web page change detection to only operate during business hours.
Or base it on a foreign timezone (for example, when you want to check for the latest news headlines in a foreign country at 09:00 AM).
<img src="./docs/scheduler.png" style="max-width:80%;" alt="How to monitor web page changes according to a schedule" title="How to monitor web page changes according to a schedule" />
Includes quick short-cut buttons to setup a schedule for **business hours only**, or **weekends**.
### We have a Chrome extension!
Easily add the current web page to your changedetection.io tool: simply install the extension and click "Sync" to connect it to your existing changedetection.io install.
[<img src="./docs/chrome-extension-screenshot.png" style="max-width:80%;" alt="Chrome Extension to easily add the current web-page to detect a change." title="Chrome Extension to easily add the current web-page to detect a change." />](https://chromewebstore.google.com/detail/changedetectionio-website/kefcfmgmlhmankjmnbijimhofdjekbop)
[Goto the Chrome Webstore to download the extension.](https://chromewebstore.google.com/detail/changedetectionio-website/kefcfmgmlhmankjmnbijimhofdjekbop) ( Or check out the [GitHub repo](https://github.com/dgtlmoon/changedetection.io-browser-extension) )
## Installation
### Docker
With Docker compose, just clone this repository and..
```bash
$ docker compose up -d
```
Now visit http://127.0.0.1:5000 and you should be able to access the UI.
Docker standalone
```bash
$ docker run -d --restart always -p "127.0.0.1:5000:5000" -v datastore-volume:/datastore --name changedetection.io dgtlmoon/changedetection.io
```
#### Updating to latest version
`:latest` tag is our latest stable release, `:dev` tag is our bleeding edge `master` branch.
Highly recommended :)
Alternative docker repository over at ghcr - [ghcr.io/dgtlmoon/changedetection.io](https://ghcr.io/dgtlmoon/changedetection.io)
### Windows
See the install instructions at the wiki https://github.com/dgtlmoon/changedetection.io/wiki/Microsoft-Windows
### Python Pip
Check out our pypi page https://pypi.org/project/changedetection.io/
```bash
$ pip3 install changedetection.io
$ changedetection.io -d /path/to/empty/data/dir -p 5000
```
Then visit http://127.0.0.1:5000 and you should be able to access the UI.
_Now with per-site configurable support for using a fast built-in HTTP fetcher or a Chrome-based fetcher for monitoring JavaScript websites!_
## Updating changedetection.io
### Docker
```
docker pull dgtlmoon/changedetection.io
docker kill $(docker ps -a|grep changedetection.io|awk '{print $1}')
docker rm $(docker ps -a|grep changedetection.io|awk '{print $1}')
docker kill $(docker ps -a -f name=changedetection.io -q)
docker rm $(docker ps -a -f name=changedetection.io -q)
docker run -d --restart always -p "127.0.0.1:5000:5000" -v datastore-volume:/datastore --name changedetection.io dgtlmoon/changedetection.io
```
### docker compose
```bash
docker compose pull && docker compose up -d
```
See the wiki for more information https://github.com/dgtlmoon/changedetection.io/wiki
### Screenshots
Application running.
![Self-hosted web page change monitoring application screenshot](screenshot.png?raw=true "Self-hosted web page change monitoring screenshot")
Examining differences in content.
![Self-hosted web page change monitoring context difference screenshot](screenshot-diff.png?raw=true "Self-hosted web page change monitoring context difference screenshot")
### Future plans
- Greater configuration of check interval times, page request headers.
- ~~General options for timeout, default headers~~
- On change detection, callout to another API (handy for notices/issue trackers)
- ~~Explore the differences that were detected~~
- Add more options to explore versions of differences
- Use a graphic/rendered page difference instead of text (see the experimental `selenium-screenshot-diff` branch)
## Filters
XPath(1.0), JSONPath, jq, and CSS support comes baked in! You can be as specific as you need; use XPath exported from various XPath element query creation tools.
(We support LXML `re:test`, `re:match` and `re:replace`.)
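As a rough illustration of what an XPath filter with `re:test` can express, here is a minimal standalone lxml sketch; the sample HTML and the price-like pattern are made up for the example and this is not the application's own filter code:
```python
from lxml import html

# EXSLT regular-expressions namespace provides re:test / re:match / re:replace in XPath 1.0
NS = {"re": "http://exslt.org/regular-expressions"}

doc = html.fromstring("<div><p class='price'>USD 3,949.99</p><p class='note'>In stock</p></div>")

# Keep only <p> elements whose text looks like a price
matches = doc.xpath(r"//p[re:test(text(), '\d+[,.]\d{2}')]", namespaces=NS)
print([m.text_content() for m in matches])  # ['USD 3,949.99']
```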
## Notifications
ChangeDetection.io supports a massive number of notification services (including email, office365, custom APIs, etc.) when a change is detected in a web page, thanks to the <a href="https://github.com/caronc/apprise">apprise</a> library.
Simply set one or more notification URLs in the _[edit]_ tab of that watch.
Just some examples:
discord://webhook_id/webhook_token
flock://app_token/g:channel_id
gitter://token/room
gchat://workspace/key/token
msteams://TokenA/TokenB/TokenC/
o365://TenantID:AccountEmail/ClientID/ClientSecret/TargetEmail
rocket://user:password@hostname/#Channel
mailto://user:pass@example.com?to=receivingAddress@example.com
json://someserver.com/custom-api
syslog://
Please :star: star :star: this project and help it grow! https://github.com/dgtlmoon/changedetection.io/
<a href="https://github.com/caronc/apprise#popular-notification-services">And everything else in this list!</a>
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-notifications.png" style="max-width:100%;" alt="Self-hosted web page change monitoring notifications" title="Self-hosted web page change monitoring notifications" />
Now you can also customise your notification content and use <a target="_new" href="https://jinja.palletsprojects.com/en/3.0.x/templates/">Jinja2 templating</a> for their title and body!
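For a feel of how this fits together, here is a minimal standalone sketch using the apprise library and a Jinja2 template; the token names (`watch_url`, `diff`) and the chosen notification URL are illustrative only, not the application's actual variable names:
```python
import apprise
from jinja2 import Template

# Illustrative tokens only; the application's real template variables may differ.
tokens = {"watch_url": "https://example.com", "diff": "Price changed from 10.00 to 8.99"}

title = Template("Change detected on {{ watch_url }}").render(**tokens)
body = Template("{{ diff }}").render(**tokens)

notifier = apprise.Apprise()
notifier.add("json://someserver.com/custom-api")  # any notification URL from the list above
notifier.notify(title=title, body=body)
```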
## JSON API Monitoring
Detect changes and monitor data in JSON APIs by using either JSONPath or jq to filter, parse, and restructure JSON as needed.
![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/json-filter-field-example.png)
This will re-parse the JSON and apply formatting to the text, making it super easy to monitor and detect changes in JSON API results
![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/json-diff-example.png)
### JSONPath or jq?
For more complex parsing, filtering, and modifying of JSON data, jq is recommended due to the built-in operators and functions. Refer to the [documentation](https://stedolan.github.io/jq/manual/) for more specific information on jq.
One big advantage of `jq` is that you can use logic in your JSON filter, such as filters to only show items that have a value greater than/less than etc.
See the wiki https://github.com/dgtlmoon/changedetection.io/wiki/JSON-Selector-Filter-help for more information and examples
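A minimal sketch of the difference, assuming the `jsonpath-ng` and `jq` Python packages; the application's own filter syntax is the `json:` / `jq:` prefix shown above, so this only illustrates the underlying idea:
```python
import jq
from jsonpath_ng import parse

data = {"items": [{"name": "A", "price": 120.0}, {"name": "B", "price": 80.0}]}

# JSONPath: great for picking values out of a structure
prices = [m.value for m in parse("$..price").find(data)]
print(prices)  # [120.0, 80.0]

# jq: can also apply logic, e.g. only items costing more than 100
expensive = jq.compile('.items[] | select(.price > 100) | .name').input(data).all()
print(expensive)  # ['A']
```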
### Parse JSON embedded in HTML!
When you enable a `json:` or `jq:` filter, you can even automatically extract and parse embedded JSON inside an HTML page! Amazingly handy for sites that build content based on JSON, such as many e-commerce websites.
```
<html>
...
<script type="application/ld+json">
{
"@context":"http://schema.org/",
"@type":"Product",
"offers":{
"@type":"Offer",
"availability":"http://schema.org/InStock",
"price":"3949.99",
"priceCurrency":"USD",
"url":"https://www.newegg.com/p/3D5-000D-001T1"
},
"description":"Cobratype King Cobra Hero Desktop Gaming PC",
"name":"Cobratype King Cobra Hero Desktop Gaming PC",
"sku":"3D5-000D-001T1",
"itemCondition":"NewCondition"
}
</script>
```
`json:$..price` or `jq:..price` would give `3949.99`, or you can extract the whole structure (use a JSONPath test website to validate your expression).
The application can also notify you that it is able to follow this information automatically.
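As a rough sketch of the idea (not the application's actual implementation), extracting the embedded `ld+json` block above and applying a JSONPath to it, assuming the `jsonpath-ng` package, could look like:
```python
import json
import re
from jsonpath_ng import parse

html_page = """
<html><script type="application/ld+json">
{"@type": "Product", "offers": {"price": "3949.99", "priceCurrency": "USD"}}
</script></html>
"""

# Pull out the first ld+json block and parse it (a real parser would use an HTML library)
raw = re.search(r'<script type="application/ld\+json">(.*?)</script>', html_page, re.S).group(1)
data = json.loads(raw)

print([m.value for m in parse("$..price").find(data)])  # ['3949.99']
```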
## Proxy Configuration
See the wiki https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration , we also support using [Bright Data proxy services where possible](https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration#brightdata-proxy-support) and [Oxylabs](https://oxylabs.go2cloud.org/SH2d) proxy services.
## Raspberry Pi support?
Raspberry Pi and linux/arm/v6, linux/arm/v7 and arm64 devices are supported! See the wiki for [details](https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver)
## Import support
Easily [import your list of websites to watch for changes in Excel .xlsx file format](https://changedetection.io/tutorial/how-import-your-website-change-detection-lists-excel), or paste in lists of website URLs as plaintext.
Excel import is recommended - that way you can better organise tags/groups of websites and other features.
## API Support
Supports managing the website watch list [via our API](https://changedetection.io/docs/api_v1/index.html)
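A hedged sketch of what driving the watch list over HTTP could look like; the `/api/v1/watch` path, the `x-api-key` header and the field names are assumptions here, so check the linked API docs for the authoritative details:
```python
import requests

API = "http://127.0.0.1:5000/api/v1"      # assumed local instance
HEADERS = {"x-api-key": "YOUR_API_KEY"}   # header name assumed; see the API docs linked above

# Add a new watch (endpoint and fields assumed from the API docs)
r = requests.post(f"{API}/watch", json={"url": "https://example.com", "tag": "demo"}, headers=HEADERS)
print(r.status_code, r.text)

# List existing watches
print(requests.get(f"{API}/watch", headers=HEADERS).json())
```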
## Support us
Do you use changedetection.io to make money? Does it save you time or money? Does it make your life easier or less stressful? Remember, we write this software when we should be doing actual paid work; we have to buy food and pay rent just like you.
Consider taking out an officially supported [website change detection subscription](https://changedetection.io?src=github); even if you don't use it, you still get the warm fuzzy feeling of helping out the project. (And who knows, you might just use it!)
## Commercial Support
I offer commercial support. This software is depended on by network security, aerospace, data-science and data-journalism professionals, just to name a few. Please reach out at dgtlmoon@gmail.com for any enquiries; I am more than glad to work with your organisation to further the possibilities of what can be done with changedetection.io.
[release-shield]: https://img.shields.io:/github/v/release/dgtlmoon/changedetection.io?style=for-the-badge
[docker-pulls]: https://img.shields.io/docker/pulls/dgtlmoon/changedetection.io?style=for-the-badge
[test-shield]: https://github.com/dgtlmoon/changedetection.io/actions/workflows/test-only.yml/badge.svg?branch=master
[license-shield]: https://img.shields.io/github/license/dgtlmoon/changedetection.io.svg?style=for-the-badge
[release-link]: https://github.com/dgtlmoon/changedetection.io/releases
[docker-link]: https://hub.docker.com/r/dgtlmoon/changedetection.io
## Commercial Licencing
If you are reselling this software, either in part or in full, as part of any commercial arrangement, you must abide by our COMMERCIAL_LICENCE.md found in our code repository; please contact dgtlmoon@gmail.com and contact@changedetection.io.
## Third-party licenses
changedetectionio.html_tools.elementpath_tostring: Copyright (c), 2018-2021, SISSA (Scuola Internazionale Superiore di Studi Avanzati), Licensed under [MIT license](https://github.com/sissaschool/elementpath/blob/master/LICENSE)
## Contributors
Recognition of fantastic contributors to the project
- Constantin Hong https://github.com/Constantin1489

View File

@@ -1,489 +0,0 @@
#!/usr/bin/python3
# @todo logging
# @todo sort by last_changed
# @todo extra options for url like , verify=False etc.
# @todo enable https://urllib3.readthedocs.io/en/latest/user-guide.html#ssl as option?
# @todo maybe a button to reset all 'last-changed'.. so you can see it clearly when something happens since your last visit
# @todo option for interval day/6 hour/etc
# @todo on change detected, config for calling some API
# @todo make tables responsive!
# @todo fetch title into json
# https://distill.io/features
# proxy per check
#i
import json
import eventlet
import eventlet.wsgi
import time
import os
import getopt
import sys
import datetime
import timeago
import threading
import queue
from flask import Flask, render_template, request, send_file, send_from_directory, safe_join, abort, redirect, url_for
# Local
import store
running_update_threads = []
ticker_thread = None
datastore = store.ChangeDetectionStore()
messages = []
extra_stylesheets = []
update_q = queue.Queue()
app = Flask(__name__, static_url_path='/static')
app.config['STATIC_RESOURCES'] = "/app/static"
app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 0
# app.config['SECRET_KEY'] = 'secret!'
# Disables caching of the templates
app.config['TEMPLATES_AUTO_RELOAD'] = True
# We use the whole watch object from the store/JSON so we can see if there's some related status in terms of a thread
# running or something similar.
@app.template_filter('format_last_checked_time')
def _jinja2_filter_datetime(watch_obj, format="%Y-%m-%d %H:%M:%S"):
# Worker thread tells us which UUID it is currently processing.
for t in running_update_threads:
if t.current_uuid == watch_obj['uuid']:
return "Checking now.."
if watch_obj['last_checked'] == 0:
return 'Not yet'
return timeago.format(int(watch_obj['last_checked']), time.time())
# @app.context_processor
# def timeago():
# def _timeago(lower_time, now):
# return timeago.format(lower_time, now)
# return dict(timeago=_timeago)
@app.template_filter('format_timestamp_timeago')
def _jinja2_filter_datetimestamp(timestamp, format="%Y-%m-%d %H:%M:%S"):
if timestamp == 0:
return 'Not yet'
return timeago.format(timestamp, time.time())
# return timeago.format(timestamp, time.time())
# return datetime.datetime.utcfromtimestamp(timestamp).strftime(format)
@app.route("/", methods=['GET'])
def main_page():
global messages
limit_tag = request.args.get('tag')
# Sort by last_changed and add the uuid which is usually the key..
sorted_watches = []
for uuid, watch in datastore.data['watching'].items():
if limit_tag != None:
# Support for comma separated list of tags.
for tag_in_watch in watch['tag'].split(','):
tag_in_watch = tag_in_watch.strip()
if tag_in_watch == limit_tag:
watch['uuid'] = uuid
sorted_watches.append(watch)
else:
watch['uuid'] = uuid
sorted_watches.append(watch)
sorted_watches.sort(key=lambda x: x['last_changed'], reverse=True)
existing_tags = datastore.get_all_tags()
output = render_template("watch-overview.html",
watches=sorted_watches,
messages=messages,
tags=existing_tags,
active_tag=limit_tag)
# Show messages but once.
messages = []
return output
@app.route("/scrub", methods=['GET', 'POST'])
def scrub_page():
from pathlib import Path
global messages
if request.method == 'POST':
confirmtext = request.form.get('confirmtext')
if confirmtext == 'scrub':
for txt_file_path in Path('/datastore').rglob('*.txt'):
os.unlink(txt_file_path)
for uuid, watch in datastore.data['watching'].items():
watch['last_checked'] = 0
watch['last_changed'] = 0
watch['previous_md5'] = None
watch['history'] = {}
datastore.needs_write = True
messages.append({'class': 'ok', 'message': 'Cleaned all version history.'})
else:
messages.append({'class': 'error', 'message': 'Wrong confirm text.'})
return redirect(url_for('main_page'))
return render_template("scrub.html")
@app.route("/edit", methods=['GET', 'POST'])
def edit_page():
global messages
import validators
if request.method == 'POST':
uuid = request.args.get('uuid')
url = request.form.get('url').strip()
tag = request.form.get('tag').strip()
form_headers = request.form.get('headers').strip().split("\n")
extra_headers = {}
if form_headers:
for header in form_headers:
if len(header):
parts = header.split(':', 1)
extra_headers.update({parts[0].strip(): parts[1].strip()})
validators.url(url) # @todo switch to prop/attr/observer
datastore.data['watching'][uuid].update({'url': url,
'tag': tag,
'headers': extra_headers})
datastore.needs_write = True
messages.append({'class': 'ok', 'message': 'Updated watch.'})
return redirect(url_for('main_page'))
else:
uuid = request.args.get('uuid')
output = render_template("edit.html", uuid=uuid, watch=datastore.data['watching'][uuid], messages=messages)
return output
@app.route("/settings", methods=['GET', "POST"])
def settings_page():
global messages
if request.method == 'POST':
try:
minutes = int(request.values.get('minutes').strip())
except ValueError:
messages.append({'class': 'error', 'message': "Invalid value given, use an integer."})
else:
if minutes >= 5 and minutes <= 600:
datastore.data['settings']['requests']['minutes_between_check'] = minutes
datastore.needs_write = True
messages.append({'class': 'ok', 'message': "Updated"})
else:
messages.append({'class': 'error', 'message': "Must be equal to or greater than 5 and less than 600 minutes"})
output = render_template("settings.html", messages=messages, minutes=datastore.data['settings']['requests']['minutes_between_check'])
messages =[]
return output
@app.route("/import", methods=['GET', "POST"])
def import_page():
import validators
global messages
remaining_urls=[]
good = 0
if request.method == 'POST':
urls = request.values.get('urls').split("\n")
for url in urls:
url = url.strip()
if len(url) and validators.url(url):
datastore.add_watch(url=url.strip(), tag="")
good += 1
else:
if len(url):
remaining_urls.append(url)
messages.append({'class': 'ok', 'message': "{} Imported, {} Skipped.".format(good, len(remaining_urls))})
output = render_template("import.html",
messages=messages,
remaining="\n".join(remaining_urls)
)
messages = []
return output
@app.route("/diff/<string:uuid>", methods=['GET'])
def diff_history_page(uuid):
global messages
extra_stylesheets=['/static/css/diff.css']
watch = datastore.data['watching'][uuid]
dates = list(watch['history'].keys())
# Convert to int, sort and back to str again
dates = [int(i) for i in dates]
dates.sort(reverse=True)
dates = [str(i) for i in dates]
newest_file = watch['history'][dates[0]]
with open(newest_file, 'r') as f:
newest_version_file_contents = f.read()
previous_version = request.args.get('previous_version')
try:
previous_file = watch['history'][previous_version]
except KeyError:
# Not present, use a default value, the second one in the sorted list.
previous_file = watch['history'][dates[1]]
with open(previous_file, 'r') as f:
previous_version_file_contents = f.read()
output = render_template("diff.html", watch_a=watch,
messages=messages,
newest=newest_version_file_contents,
previous=previous_version_file_contents,
extra_stylesheets=extra_stylesheets,
versions=dates[1:],
newest_version_timestamp=dates[0],
current_previous_version=str(previous_version),
current_diff_url=watch['url'])
return output
@app.route("/favicon.ico", methods=['GET'])
def favicon():
return send_from_directory("/app/static/images", filename="favicon.ico")
# We're good but backups are even better!
@app.route("/backup", methods=['GET'])
def get_backup():
import zipfile
from pathlib import Path
import zlib
# create a ZipFile object
backupname = "changedetection-backup-{}.zip".format(int(time.time()))
# We only care about UUIDS from the current index file
uuids = list(datastore.data['watching'].keys())
with zipfile.ZipFile(os.path.join("/datastore", backupname), 'w', compression=zipfile.ZIP_DEFLATED,
compresslevel=6) as zipObj:
# Be sure we're written fresh
datastore.sync_to_json()
# Add the index
zipObj.write(os.path.join("/datastore", "url-watches.json"))
# Add any snapshot data we find
for txt_file_path in Path('/datastore').rglob('*.txt'):
parent_p = txt_file_path.parent
if parent_p.name in uuids:
zipObj.write(txt_file_path)
return send_file(os.path.join("/datastore", backupname),
as_attachment=True,
mimetype="application/zip",
attachment_filename=backupname)
# A few self sanity checks, mostly for developer/bug check
@app.route("/self-check", methods=['GET'])
def selfcheck():
output = "All fine"
# In earlier versions before a single threaded write of the JSON store, sometimes histories could get mixed.
# Could also maybe affect people who manually fiddle with their JSON store?
for uuid, watch in datastore.data['watching'].items():
for timestamp, path in watch['history'].items():
# Each history snapshot should include a full path, which contains the {uuid}
if not uuid in path:
output = "Something weird in {}, suspected incorrect snapshot path.".format(uuid)
return output
@app.route("/static/<string:group>/<string:filename>", methods=['GET'])
def static_content(group, filename):
try:
return send_from_directory("/app/static/{}".format(group), filename=filename)
except FileNotFoundError:
abort(404)
@app.route("/api/add", methods=['POST'])
def api_watch_add():
global messages
# @todo add_watch should throw a custom Exception for validation etc
new_uuid = datastore.add_watch(url=request.form.get('url').strip(), tag=request.form.get('tag').strip())
# Straight into the queue.
update_q.put(new_uuid)
messages.append({'class': 'ok', 'message': 'Watch added.'})
return redirect(url_for('main_page'))
@app.route("/api/delete", methods=['GET'])
def api_delete():
global messages
uuid = request.args.get('uuid')
datastore.delete(uuid)
messages.append({'class': 'ok', 'message': 'Deleted.'})
return redirect(url_for('main_page'))
@app.route("/api/checknow", methods=['GET'])
def api_watch_checknow():
global messages
tag = request.args.get('tag')
uuid = request.args.get('uuid')
i=0
if uuid:
update_q.put(uuid)
i = 1
elif tag != None:
for watch_uuid, watch in datastore.data['watching'].items():
if (tag != None and tag in watch['tag']):
i += 1
update_q.put(watch_uuid)
else:
# No tag, no uuid, add everything.
for watch_uuid, watch in datastore.data['watching'].items():
i += 1
update_q.put(watch_uuid)
messages.append({'class': 'ok', 'message': "{} watches are rechecking.".format(i)})
return redirect(url_for('main_page', tag=tag))
# Requests for checking on the site use a pool of thread Workers managed by a Queue.
class Worker(threading.Thread):
current_uuid = None
def __init__(self, q, *args, **kwargs):
self.q = q
super().__init__(*args, **kwargs)
def run(self):
import fetch_site_status
try:
while True:
uuid = self.q.get() # Blocking
self.current_uuid = uuid
if uuid in list(datastore.data['watching'].keys()):
update_handler = fetch_site_status.perform_site_check(uuid=uuid, datastore=datastore)
datastore.update_watch(uuid=uuid, update_obj=update_handler.update_data)
self.current_uuid = None # Done
self.q.task_done()
except KeyboardInterrupt:
return
# Thread runner to check every minute, look for new watches to feed into the Queue.
def ticker_thread_check_time_launch_checks():
# Spin up Workers.
for _ in range(datastore.data['settings']['requests']['workers']):
new_worker = Worker(update_q)
running_update_threads.append(new_worker)
new_worker.start()
# Every minute check for new UUIDs to follow up on
while True:
minutes = datastore.data['settings']['requests']['minutes_between_check']
for uuid, watch in datastore.data['watching'].items():
if watch['last_checked'] <= time.time() - (minutes * 60):
update_q.put(uuid)
time.sleep(60)
# Thread runner, this helps with thread/write issues when there are many operations that want to update the JSON
# by just running periodically in one thread, according to python, dict updates are threadsafe.
def save_datastore():
try:
while True:
if datastore.needs_write:
datastore.sync_to_json()
time.sleep(5)
except KeyboardInterrupt:
return
def main(argv):
ssl_mode = False
port = 5000
try:
opts, args = getopt.getopt(argv, "sp:", "purge")
except getopt.GetoptError:
print('backend.py -s SSL enable -p [port]')
sys.exit(2)
for opt, arg in opts:
if opt == '--purge':
# Remove history, the actual files you need to delete manually.
for uuid, watch in datastore.data['watching'].items():
watch.update({'history': {}, 'last_checked': 0, 'last_changed': 0, 'previous_md5': None})
if opt == '-s':
ssl_mode = True
if opt == '-p':
port = arg
# @todo handle ctrl break
ticker_thread = threading.Thread(target=ticker_thread_check_time_launch_checks).start()
save_data_thread = threading.Thread(target=save_datastore).start()
# @todo finalise SSL config, but this should get you in the right direction if you need it.
if ssl_mode:
eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen(('', port)),
certfile='cert.pem',
keyfile='privkey.pem',
server_side=True), app)
else:
eventlet.wsgi.server(eventlet.listen(('', port)), app)
if __name__ == '__main__':
main(sys.argv[1:])

View File

@@ -1,16 +0,0 @@
FROM python:3.8-slim
# https://stackoverflow.com/questions/58701233/docker-logs-erroneously-appears-empty-until-container-stops
ENV PYTHONUNBUFFERED=1
# Should be mounted from docker-compose-development.yml
RUN pip3 install -r /requirements.txt
RUN [ ! -d "/datastore" ] && mkdir /datastore
COPY sleep.py /
CMD [ "python", "/sleep.py" ]

View File

@@ -1,9 +0,0 @@
import time
import sys
print ("Sleep loop, you should run your script from the console")
while True:
# Wait for 2 seconds
time.sleep(2)

View File

@@ -1,132 +0,0 @@
import time
import requests
import hashlib
import os
import re
from inscriptis import get_text
# Some common stuff here that can be moved to a base class
class perform_site_check():
# New state that is set after a check
# Return value dict
update_obj = {}
def __init__(self, *args, uuid=False, datastore, **kwargs):
super().__init__(*args, **kwargs)
self.timestamp = int(time.time()) # used for storage etc too
self.uuid = uuid
self.datastore = datastore
self.url = datastore.get_val(uuid, 'url')
self.current_md5 = datastore.get_val(uuid, 'previous_md5')
self.output_path = "/datastore/{}".format(self.uuid)
self.ensure_output_path()
self.run()
# Current state of what needs to be updated
@property
def update_data(self):
return self.update_obj
def save_firefox_screenshot(self, uuid, output):
# @todo call selenium or whatever
return
def ensure_output_path(self):
try:
os.stat(self.output_path)
except:
os.mkdir(self.output_path)
def save_response_html_output(self, output):
# @todo Saving the original HTML can be very large, better to set as an option, these files could be important to some.
with open("{}/{}.html".format(self.output_path, self.timestamp), 'w') as f:
f.write(output)
f.close()
def save_response_stripped_output(self, output):
fname = "{}/{}.stripped.txt".format(self.output_path, self.timestamp)
with open(fname, 'w') as f:
f.write(output)
f.close()
return fname
def run(self):
extra_headers = self.datastore.get_val(self.uuid, 'headers')
# Tweak the base config with the per-watch ones
request_headers = self.datastore.data['settings']['headers'].copy()
request_headers.update(extra_headers)
# https://github.com/psf/requests/issues/4525
# Requests doesnt yet support brotli encoding, so don't put 'br' here, be totally sure that the user cannot
# do this by accident.
if 'Accept-Encoding' in request_headers and "br" in request_headers['Accept-Encoding']:
request_headers['Accept-Encoding'] = request_headers['Accept-Encoding'].replace(', br', '')
try:
timeout = self.datastore.data['settings']['requests']['timeout']
except KeyError:
# @todo yeah this should go back to the default value in store.py, but this whole object should abstract off it
timeout = 15
try:
r = requests.get(self.url,
headers=request_headers,
timeout=timeout,
verify=False)
stripped_text_from_html = get_text(r.text)
# Usually from networkIO/requests level
except (requests.exceptions.ConnectionError, requests.exceptions.ReadTimeout) as e:
self.update_obj["last_error"] = str(e)
print(str(e))
except requests.exceptions.MissingSchema:
print("Skipping {} due to missing schema/bad url".format(self.uuid))
# Usually from html2text level
except UnicodeDecodeError as e:
self.update_obj["last_error"] = str(e)
print(str(e))
# figure out how to deal with this cleaner..
# 'utf-8' codec can't decode byte 0xe9 in position 480: invalid continuation byte
else:
# We rely on the actual text in the html output.. many sites have random script vars etc,
# in the future we'll implement other mechanisms.
self.update_obj["last_check_status"] = r.status_code
self.update_obj["last_error"] = False
fetched_md5 = hashlib.md5(stripped_text_from_html.encode('utf-8')).hexdigest()
if self.current_md5 != fetched_md5:
# Don't confuse people by updating as last-changed, when it actually just changed from None..
if self.datastore.get_val(self.uuid, 'previous_md5') is not None:
self.update_obj["last_changed"] = self.timestamp
self.update_obj["previous_md5"] = fetched_md5
self.save_response_html_output(r.text)
output_filepath = self.save_response_stripped_output(stripped_text_from_html)
# Update history with the stripped text for future reference, this will also mean we save the first
timestamp = str(self.timestamp)
self.update_obj.update({"history": {timestamp: output_filepath}})
self.update_obj["last_checked"] = self.timestamp

View File

@@ -1,14 +0,0 @@
from flask import make_response
from functools import wraps, update_wrapper
from datetime import datetime
def nocache(view):
@wraps(view)
def no_cache(*args, **kwargs):
response = make_response(view(*args, **kwargs))
response.headers['hmm'] = datetime.now()
return response
return update_wrapper(no_cache, view)

View File

@@ -1,66 +0,0 @@
table {
table-layout: fixed;
width: 100%;
}
td {
width: 33%;
padding: 3px 4px;
border: 1px solid transparent;
vertical-align: top;
font: 1em monospace;
text-align: left;
white-space: pre-wrap;
}
h1 {
display: inline;
font-size: 100%;
}
del {
text-decoration: none;
color: #b30000;
background: #fadad7;
}
ins {
background: #eaf2c2;
color: #406619;
text-decoration: none;
}
#result {
white-space: pre-wrap;
}
#settings {
background: rgba(0,0,0,.05);
padding: 1em;
border-radius: 10px;
margin-bottom: 1em;
color: #fff;
font-size: 80%;
}
#settings label {
margin-left: 1em;
display: inline-block;
font-weight: normal;
}
.source {
position: absolute;
right: 1%;
top: .2em;
}
@-moz-document url-prefix() {
body {
height: 99%; /* Hide scroll bar in Firefox */
}
}
#diff-ui {
background: #fff;
padding: 2em;
margin: 1em;
border-radius: 5px;
font-size: 9px;
}

View File

@@ -1,210 +0,0 @@
/*
* -- BASE STYLES --
* Most of these are inherited from Base, but I want to change a few.
*/
body {
color: #333;
background: #262626;
}
.pure-table-even {
background: #fff;
}
/* Some styles from https://css-tricks.com/ */
a {
text-decoration: none;
color: #1b98f8;
}
a.github-link {
color: #fff;
}
.pure-menu-horizontal {
background: #fff;
padding: 5px;
display: flex;
justify-content: space-between;
border-bottom: 2px solid #ed5900;
align-items: center;
}
section.content {
padding-top: 5em;
padding-bottom: 5em;
flex-direction: column;
display: flex;
align-items: center;
justify-content: center;
}
.pure-table.watch-table td {
font-size: 80%;
}
/* table related */
.watch-table {
width: 100%;
}
.watch-tag-list {
color: #e70069;
white-space: nowrap;
}
.box {
max-width: 80%;
flex-direction: column;
display: flex;
justify-content: center;
}
.watch-table .error {
color: #a00;
}
.watch-table td {
white-space: nowrap;
}
.watch-table td.title-col {
word-break: break-all;
white-space: normal;
}
.watch-table th {
white-space: nowrap;
}
.watch-table .title-col a[target="_blank"]::after, .current-diff-url::after {
content: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAQElEQVR42qXKwQkAIAxDUUdxtO6/RBQkQZvSi8I/pL4BoGw/XPkh4XigPmsUgh0626AjRsgxHTkUThsG2T/sIlzdTsp52kSS1wAAAABJRU5ErkJggg==);
margin: 0 3px 0 5px;
}
#check-all-button {
text-align:right;
}
#check-all-button a {
border-top-left-radius: initial;
border-top-right-radius: initial;
border-bottom-left-radius: 5px;
border-bottom-right-radius: 5px;
}
body:after {
content: "";
background: linear-gradient(130deg, #ff7a18, #af002d 41.07%, #319197 76.05%)
}
body:after, body:before {
display: block;
height: 600px;
position: absolute;
top: 0;
left: 0;
width: 100%;
z-index: -1;
}
body::after {
opacity: 0.91;
}
body::before {
content: "";
background-image: url(/static/images/gradient-border.png);
}
body:before {
background-size: cover
}
body:after, body:before {
-webkit-clip-path: polygon(100% 0, 0 0, 0 77.5%, 1% 77.4%, 2% 77.1%, 3% 76.6%, 4% 75.9%, 5% 75.05%, 6% 74.05%, 7% 72.95%, 8% 71.75%, 9% 70.55%, 10% 69.3%, 11% 68.05%, 12% 66.9%, 13% 65.8%, 14% 64.8%, 15% 64%, 16% 63.35%, 17% 62.85%, 18% 62.6%, 19% 62.5%, 20% 62.65%, 21% 63%, 22% 63.5%, 23% 64.2%, 24% 65.1%, 25% 66.1%, 26% 67.2%, 27% 68.4%, 28% 69.65%, 29% 70.9%, 30% 72.15%, 31% 73.3%, 32% 74.35%, 33% 75.3%, 34% 76.1%, 35% 76.75%, 36% 77.2%, 37% 77.45%, 38% 77.5%, 39% 77.3%, 40% 76.95%, 41% 76.4%, 42% 75.65%, 43% 74.75%, 44% 73.75%, 45% 72.6%, 46% 71.4%, 47% 70.15%, 48% 68.9%, 49% 67.7%, 50% 66.55%, 51% 65.5%, 52% 64.55%, 53% 63.75%, 54% 63.15%, 55% 62.75%, 56% 62.55%, 57% 62.5%, 58% 62.7%, 59% 63.1%, 60% 63.7%, 61% 64.45%, 62% 65.4%, 63% 66.45%, 64% 67.6%, 65% 68.8%, 66% 70.05%, 67% 71.3%, 68% 72.5%, 69% 73.6%, 70% 74.65%, 71% 75.55%, 72% 76.35%, 73% 76.9%, 74% 77.3%, 75% 77.5%, 76% 77.45%, 77% 77.25%, 78% 76.8%, 79% 76.2%, 80% 75.4%, 81% 74.45%, 82% 73.4%, 83% 72.25%, 84% 71.05%, 85% 69.8%, 86% 68.55%, 87% 67.35%, 88% 66.2%, 89% 65.2%, 90% 64.3%, 91% 63.55%, 92% 63%, 93% 62.65%, 94% 62.5%, 95% 62.55%, 96% 62.8%, 97% 63.3%, 98% 63.9%, 99% 64.75%, 100% 65.7%);
clip-path: polygon(100% 0, 0 0, 0 77.5%, 1% 77.4%, 2% 77.1%, 3% 76.6%, 4% 75.9%, 5% 75.05%, 6% 74.05%, 7% 72.95%, 8% 71.75%, 9% 70.55%, 10% 69.3%, 11% 68.05%, 12% 66.9%, 13% 65.8%, 14% 64.8%, 15% 64%, 16% 63.35%, 17% 62.85%, 18% 62.6%, 19% 62.5%, 20% 62.65%, 21% 63%, 22% 63.5%, 23% 64.2%, 24% 65.1%, 25% 66.1%, 26% 67.2%, 27% 68.4%, 28% 69.65%, 29% 70.9%, 30% 72.15%, 31% 73.3%, 32% 74.35%, 33% 75.3%, 34% 76.1%, 35% 76.75%, 36% 77.2%, 37% 77.45%, 38% 77.5%, 39% 77.3%, 40% 76.95%, 41% 76.4%, 42% 75.65%, 43% 74.75%, 44% 73.75%, 45% 72.6%, 46% 71.4%, 47% 70.15%, 48% 68.9%, 49% 67.7%, 50% 66.55%, 51% 65.5%, 52% 64.55%, 53% 63.75%, 54% 63.15%, 55% 62.75%, 56% 62.55%, 57% 62.5%, 58% 62.7%, 59% 63.1%, 60% 63.7%, 61% 64.45%, 62% 65.4%, 63% 66.45%, 64% 67.6%, 65% 68.8%, 66% 70.05%, 67% 71.3%, 68% 72.5%, 69% 73.6%, 70% 74.65%, 71% 75.55%, 72% 76.35%, 73% 76.9%, 74% 77.3%, 75% 77.5%, 76% 77.45%, 77% 77.25%, 78% 76.8%, 79% 76.2%, 80% 75.4%, 81% 74.45%, 82% 73.4%, 83% 72.25%, 84% 71.05%, 85% 69.8%, 86% 68.55%, 87% 67.35%, 88% 66.2%, 89% 65.2%, 90% 64.3%, 91% 63.55%, 92% 63%, 93% 62.65%, 94% 62.5%, 95% 62.55%, 96% 62.8%, 97% 63.3%, 98% 63.9%, 99% 64.75%, 100% 65.7%)
}
.button-small {
font-size: 85%;
}
.fetch-error {
padding-top: 1em;
font-size: 60%;
max-width: 400px;
display: block;
}
.edit-form {
background: #fff;
padding: 2em;
margin: 1em;
border-radius: 5px;
}
.button-secondary {
color: white;
border-radius: 4px;
text-shadow: 0 1px 1px rgba(0, 0, 0, 0.2);
}
.button-success {
background: rgb(28, 184, 65);
/* this is a green */
}
.button-tag {
background: rgb(99, 99, 99);
color: #fff;
font-size: 65%;
border-bottom-left-radius: initial;
border-bottom-right-radius: initial;
}
.button-tag.active {
background: #9c9c9c;
font-weight: bold;
}
.button-error {
background: rgb(202, 60, 60);
/* this is a maroon */
}
.button-warning {
background: rgb(223, 117, 20);
/* this is an orange */
}
.button-secondary {
background: rgb(66, 184, 221);
/* this is a light blue */
}
.button-cancel {
background: rgb(200, 200, 200);
/* this is a green */
}
.messages {
padding: 1em;
background: rgba(255,255,255,.2);
border-radius: 10px;
color: #fff;
font-weight: bold;
}
.pure-form label {
font-weight: bold;
}
#new-watch-form {
background: rgba(0,0,0,.05);
padding: 1em;
border-radius: 10px;
margin-bottom: 1em;
}
#new-watch-form legend {
color: #fff;
}

Binary file not shown.

Before

Width:  |  Height:  |  Size: 4.2 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 43 KiB

File diff suppressed because it is too large Load Diff

View File

@@ -1,170 +0,0 @@
import json
import uuid as uuid_builder
import validators
import os.path
from os import path
from threading import Lock, Thread
# Is there an existing library to ensure some data store (JSON etc) is in sync with CRUD methods?
# Open a github issue if you know something :)
# https://stackoverflow.com/questions/6190468/how-to-trigger-function-on-value-change
class ChangeDetectionStore:
lock = Lock()
def __init__(self):
self.needs_write = False
self.__data = {
'note': "Hello! If you change this file manually, please be sure to restart your changedetection.io instance!",
'watching': {},
'tag': "0.23",
'settings': {
'headers': {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Accept-Encoding': 'gzip, deflate', # No support for brolti in python requests yet.
'Accept-Language': 'en-GB,en-US;q=0.9,en;'
},
'requests': {
'timeout': 15, # Default 15 seconds
'minutes_between_check': 3 * 60, # Default 3 hours
'workers': 10 # Number of threads, lower is better for slow connections
}
}
}
# Base definition for all watchers
self.generic_definition = {
'url': None,
'tag': None,
'last_checked': 0,
'last_changed': 0,
'title': None,
'previous_md5': None,
'uuid': str(uuid_builder.uuid4()),
'headers': {}, # Extra headers to send
'history': {} # Dict of timestamp and output stripped filename
}
if path.isfile('/source.txt'):
with open('/source.txt') as f:
# Should be set in Dockerfile to look for /source.txt , this will give us the git commit #
# So when someone gives us a backup file to examine, we know exactly what code they were running.
self.__data['build_sha'] = f.read()
try:
with open('/datastore/url-watches.json') as json_file:
from_disk = json.load(json_file)
# @todo isnt there a way todo this dict.update recursively?
# Problem here is if the one on the disk is missing a sub-struct, it wont be present anymore.
if 'watching' in from_disk:
self.__data['watching'].update(from_disk['watching'])
if 'settings' in from_disk:
if 'headers' in from_disk['settings']:
self.__data['settings']['headers'].update(from_disk['settings']['headers'])
if 'requests' in from_disk['settings']:
self.__data['settings']['requests'].update(from_disk['settings']['requests'])
# Reinitialise each `watching` with our generic_definition in the case that we add a new var in the future.
# @todo pretty sure theres a python we todo this with an abstracted(?) object!
i = 0
for uuid, watch in self.data['watching'].items():
_blank = self.generic_definition.copy()
_blank.update(watch)
self.__data['watching'].update({uuid: _blank})
print("Watching:", uuid, _blank['url'])
# First time ran, doesnt exist.
except (FileNotFoundError, json.decoder.JSONDecodeError):
print("Creating JSON store")
self.add_watch(url='http://www.quotationspage.com/random.php', tag='test')
self.add_watch(url='https://news.ycombinator.com/', tag='Tech news')
self.add_watch(url='https://www.gov.uk/coronavirus', tag='Covid')
self.add_watch(url='https://changedetection.io', tag='Tech news')
def update_watch(self, uuid, update_obj):
self.lock.acquire()
# In python 3.9 we have the |= dict operator, but that still will lose data on nested structures...
for dict_key, d in self.generic_definition.items():
if isinstance(d, dict) and dict_key in update_obj:
self.__data['watching'][uuid][dict_key].update(update_obj[dict_key])
del(update_obj[dict_key])
# Update with the remaining values
self.__data['watching'][uuid].update(update_obj)
self.needs_write = True
self.lock.release()
@property
def data(self):
return self.__data
def get_all_tags(self):
tags = []
for uuid, watch in self.data['watching'].items():
# Support for comma separated list of tags.
for tag in watch['tag'].split(','):
tag = tag.strip()
if not tag in tags:
tags.append(tag)
tags.sort()
return tags
def delete(self, uuid):
self.lock.acquire()
del (self.__data['watching'][uuid])
self.needs_write = True
self.lock.release()
def url_exists(self, url):
# Probably there should be a dict/index for this...
for watch in self.data['watching'].values():
if watch['url'] == url:
return True
return False
def get_val(self, uuid, val):
# Probably there should be a dict/index for this...
return self.data['watching'][uuid].get(val)
def add_watch(self, url, tag):
self.lock.acquire()
print("Adding", url, tag)
# # @todo deal with exception
# validators.url(url)
# @todo use a common generic version of this
new_uuid = str(uuid_builder.uuid4())
_blank = self.generic_definition.copy()
_blank.update({
'url': url,
'tag': tag,
'uuid': new_uuid
})
self.data['watching'][new_uuid] = _blank
self.needs_write = True
self.lock.release()
return new_uuid
def sync_to_json(self):
print("Saving index")
self.lock.acquire()
with open('/datastore/url-watches.json', 'w') as json_file:
json.dump(self.data, json_file, indent=4)
self.needs_write = False
self.lock.release()
# body of the constructor
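The two @todo notes in __init__() and the comment in update_watch() point at the same limitation: dict.update() (and the Python 3.9 |= operator) only merge the top level, so a sub-struct missing from the on-disk copy would wipe the in-memory defaults. Below is a minimal sketch of the recursive merge they hint at; the deep_update name and the example values are illustrative, not part of the codebase.

# Illustrative helper (not part of this file): a recursive merge addressing the @todo above.
def deep_update(base: dict, overrides: dict) -> dict:
    """Recursively merge `overrides` into `base`, keeping nested defaults `overrides` doesn't mention."""
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(base.get(key), dict):
            deep_update(base[key], value)
        else:
            base[key] = value
    return base

# Example with values shaped like the generic_definition above
defaults = {'url': None, 'headers': {'User-Agent': 'x'}, 'history': {}}
from_disk = {'url': 'https://example.com', 'headers': {'Cookie': 'a=1'}}
merged = deep_update(defaults, from_disk)
assert merged['headers'] == {'User-Agent': 'x', 'Cookie': 'a=1'}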


@@ -1,70 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Self hosted website change detection.">
<title>Change Detection</title>
<link rel="stylesheet" href="/static/css/pure-min.css">
<link rel="stylesheet" href="/static/css/styles.css?ver=1000">
{% if extra_stylesheets %}
{% for m in extra_stylesheets %}
<link rel="stylesheet" href="{{ m }}?ver=1000">
{% endfor %}
{% endif %}
</head>
<body>
<div class="header">
<div class="home-menu pure-menu pure-menu-horizontal pure-menu-fixed">
<a class="pure-menu-heading" href="/"><strong>Change</strong>Detection.io</a>
{% if current_diff_url %}
<a class="current-diff-url" href="{{ current_diff_url }}"><span style="max-width: 30%; overflow: hidden;">{{ current_diff_url }}</span></a>
{% endif %}
<ul class="pure-menu-list">
<li class="pure-menu-item">
<a href="/backup" class="pure-menu-link">BACKUP</a>
</li>
<li class="pure-menu-item">
<a href="/import" class="pure-menu-link">IMPORT</a>
</li>
<li class="pure-menu-item">
<a href="/settings" class="pure-menu-link">SETTINGS</a>
</li>
<li class="pure-menu-item"><a class="github-link" href="https://github.com/dgtlmoon/changedetection.io">
<svg class="octicon octicon-mark-github v-align-middle" height="32" viewBox="0 0 16 16" version="1.1"
width="32" aria-hidden="true">
<path fill-rule="evenodd"
d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82.64-.18 1.32-.27 2-.27.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.013 8.013 0 0016 8c0-4.42-3.58-8-8-8z"></path>
</svg>
</a></li>
<!--
<li class="pure-menu-item"><a href="#" class="pure-menu-link">Tour</a></li>
<li class="pure-menu-item"><a href="#" class="pure-menu-link">Sign Up</a></li>
-->
</ul>
</div>
</div>
<section class="content">
<header>
{% block header %}{% endblock %}
</header>
{% if messages %}
<div class="messages">
{% for message in messages %}
<div class="flash-message {{ message['class'] }}">{{ message['message'] }}</div>
{% endfor %}
</div>
{% endif %}
{% block content %}
{% endblock %}
</section>
</body>
</html>


@@ -1,140 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div id="settings">
<h1>Differences</h1>
<form class="pure-form " action="" method="GET">
<fieldset>
<label for="diffWords" class="pure-checkbox">
<input type="radio" name="diff_type" id="diffWords" value="diffWords" /> Words</label>
<label for="diffLines" class="pure-checkbox">
<input type="radio" name="diff_type" id="diffLines" value="diffLines" checked=""/> Lines</label>
<label for="diffChars" class="pure-checkbox">
<input type="radio" name="diff_type" id="diffChars" value="diffChars"/> Chars</label>
{% if versions|length >= 1 %}
<label for="diff-version">Compare newest (<span id="current-v-date"></span>) with</label>
<select id="diff-version" name="previous_version">
{% for version in versions %}
<option value="{{version}}" {% if version== current_previous_version %} selected="" {% endif %}>
{{version}}
</option>
{% endfor %}
</select>
<button type="submit" class="pure-button pure-button-primary">Go</button>
{% endif %}
</fieldset>
</form>
<del>Removed text</del>
<ins>Inserted Text</ins>
</div>
<div id="diff-ui">
<table>
<tbody>
<tr>
<!-- just proof of concept copied straight from github.com/kpdecker/jsdiff -->
<td id="a" style="display: none;">{{previous}}</td>
<td id="b" style="display: none;">{{newest}}</td>
<td>
<span id="result"></span>
</td>
</tr>
</tbody>
</table>
Diff algorithm from the amazing <a href="https://github.com/kpdecker/jsdiff">github.com/kpdecker/jsdiff</a>
</div>
<script src="/static/js/diff.js"></script>
<script defer="">
var a = document.getElementById('a');
var b = document.getElementById('b');
var result = document.getElementById('result');
function changed() {
var diff = JsDiff[window.diffType](a.textContent, b.textContent);
var fragment = document.createDocumentFragment();
for (var i=0; i < diff.length; i++) {
if (diff[i].added && diff[i + 1] && diff[i + 1].removed) {
var swap = diff[i];
diff[i] = diff[i + 1];
diff[i + 1] = swap;
}
var node;
if (diff[i].removed) {
node = document.createElement('del');
node.appendChild(document.createTextNode(diff[i].value));
} else if (diff[i].added) {
node = document.createElement('ins');
node.appendChild(document.createTextNode(diff[i].value));
} else {
node = document.createTextNode(diff[i].value);
}
fragment.appendChild(node);
}
result.textContent = '';
result.appendChild(fragment);
}
window.onload = function() {
/* Convert what is options from UTC time.time() to local browser time */
var diffList=document.getElementById("diff-version");
if (typeof(diffList) != 'undefined' && diffList != null) {
for (var option of diffList.options) {
var dateObject = new Date(option.value*1000);
option.label=dateObject.toLocaleString();
}
}
/* Set current version date as local time in the browser also */
var current_v = document.getElementById("current-v-date");
var dateObject = new Date({{ newest_version_timestamp }}*1000);
current_v.innerHTML=dateObject.toLocaleString();
onDiffTypeChange(document.querySelector('#settings [name="diff_type"]:checked'));
changed();
};
a.onpaste = a.onchange =
b.onpaste = b.onchange = changed;
if ('oninput' in a) {
a.oninput = b.oninput = changed;
} else {
a.onkeyup = b.onkeyup = changed;
}
function onDiffTypeChange(radio) {
window.diffType = radio.value;
document.title = "Diff " + radio.value.slice(4);
}
var radio = document.getElementsByName('diff_type');
for (var i = 0; i < radio.length; i++) {
radio[i].onchange = function(e) {
onDiffTypeChange(e.target);
changed();
}
}
</script>
{% endblock %}


@@ -1,55 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<form class="pure-form pure-form-stacked" action="/edit?uuid={{uuid}}" method="POST">
<fieldset>
<div class="pure-control-group">
<label for="url">URL</label>
<input type="url" id="url" required="" placeholder="https://..." name="url" value="{{ watch.url}}"
size="50"/>
<span class="pure-form-message-inline">This is a required field.</span>
</div>
<div class="pure-control-group">
<label for="tag">Tag</label>
<input type="text" placeholder="tag" size="10" id="tag" name="tag" value="{{ watch.tag}}"/>
<span class="pure-form-message-inline">Grouping tags, can be a comma separated list.</span>
</div>
<fieldset class="pure-group">
<label for="headers">Extra request headers</label>
<textarea id=headers name="headers" class="pure-input-1-2" placeholder="Example
Cookie: foobar
User-Agent: wonderbra 1.0"
style="width: 100%;
font-family:monospace;
white-space: pre;
overflow-wrap: normal;
overflow-x: scroll;" rows="5">{% for key, value in watch.headers.items() %}{{ key }}: {{ value }}
{% endfor %}</textarea>
<br/>
</fieldset>
<div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Save</button>
</div>
<br/>
<div class="pure-control-group">
<a href="/" class="pure-button button-small button-cancel">Cancel</a>
<a href="/api/delete?uuid={{uuid}}"
class="pure-button button-small button-error ">Delete</a>
</div>
</fieldset>
</form>
</div>
{% endblock %}


@@ -1,26 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<form class="pure-form pure-form-aligned" action="/import" method="POST">
<fieldset class="pure-group">
<legend>One URL per line, URLs that do not pass validation will stay in the textarea.</legend>
<textarea name="urls" class="pure-input-1-2" placeholder="https://"
style="width: 100%;
font-family:monospace;
white-space: pre;
overflow-wrap: normal;
overflow-x: scroll;" rows="25">{{ remaining }}</textarea>
</fieldset>
<button type="submit" class="pure-button pure-input-1-2 pure-button-primary">Import</button>
</form>
</div>
{% endblock %}


@@ -1,43 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<form class="pure-form pure-form-stacked" action="/scrub" method="POST">
<fieldset>
<div class="pure-control-group">
This will remove all version snapshots/data, but keep your list of URLs. <br/>
You may like to use the <strong>BACKUP</strong> link first.<br/>
Type in the word <strong>scrub</strong> to confirm that you understand!
<br/>
</div>
<div class="pure-control-group">
<br/>
<label for="confirmtext">Confirm</label><br/>
<input type="text" id="confirmtext" required="" name="confirmtext" value="" size="10"/>
<br/>
</div>
<div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Scrub!</button>
</div>
<br/>
<div class="pure-control-group">
<a href="/" class="pure-button button-small button-cancel">Cancel</a>
</div>
</fieldset>
</form>
</div>
{% endblock %}


@@ -1,35 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<form class="pure-form pure-form-stacked" action="/settings" method="POST">
<fieldset>
<div class="pure-control-group">
<label for="minutes">Maximum time in minutes until recheck.</label>
<input type="text" id="minutes" required="" name="minutes" value="{{minutes}}"
size="5"/>
<span class="pure-form-message-inline">This is a required field.</span>
</div>
<br/>
<div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Save</button>
</div>
<br/>
<div class="pure-control-group">
<a href="/" class="pure-button button-small button-cancel">Back</a>
<a href="/scrub" class="pure-button button-small button-cancel">Reset all version data</a>
</div>
</fieldset>
</form>
</div>
{% endblock %}


@@ -1,75 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div class="box">
<form class="pure-form" action="/api/add" method="POST" id="new-watch-form">
<fieldset>
<legend>Add a new change detection watch</legend>
<input type="url" placeholder="https://..." name="url"/>
<input type="text" placeholder="tag" size="10" name="tag" value="{{active_tag if active_tag}}"/>
<button type="submit" class="pure-button pure-button-primary">Watch</button>
</fieldset>
<!-- add extra stuff, like do a http POST and send headers -->
<!-- user/pass r = requests.get('https://api.github.com/user', auth=('user', 'pass')) -->
</form>
<div>
{% for tag in tags %}
{% if tag == "" %}
<a href="/" class="pure-button button-tag {{'active' if active_tag == tag }}">All</a>
{% else %}
<a href="/?tag={{ tag}}" class="pure-button button-tag {{'active' if active_tag == tag }}">{{ tag }}</a>
{% endif %}
{% endfor %}
</div>
<div id="watch-table-wrapper">
<table class="pure-table pure-table-striped watch-table">
<thead>
<tr>
<th>#</th>
<th></th>
<th>Last Checked</th>
<th>Last Changed</th>
<th></th>
</tr>
</thead>
<tbody>
{% for watch in watches %}
<tr id="{{ watch.uuid }}"
class="{{ loop.cycle('pure-table-odd', 'pure-table-even') }} {% if watch.last_error is defined and watch.last_error != False %}error{% endif %}">
<td>{{ loop.index }}</td>
<td class="title-col">{{watch.title if watch.title is not none else watch.url}}
<a class="external" target=_blank href="{{ watch.url }}"></a>
{% if watch.last_error is defined and watch.last_error != False %}
<div class="fetch-error">{{ watch.last_error }}</div>
{% endif %}
{% if not active_tag %}
<span class="watch-tag-list">{{ watch.tag}}</span>
{% endif %}
</td>
<td>{{watch|format_last_checked_time}}</td>
<td>{{watch.last_changed|format_timestamp_timeago}}</td>
<td><a href="/api/checknow?uuid={{ watch.uuid}}{% if request.args.get('tag') %}&tag={{request.args.get('tag')}}{% endif %}" class="pure-button button-small pure-button-primary">Recheck</a>
<a href="/edit?uuid={{ watch.uuid}}" class="pure-button button-small pure-button-primary">Edit</a>
{% if watch.history|length >= 2 %}
<a href="/diff/{{ watch.uuid}}" class="pure-button button-small pure-button-primary">Diff</a>
{% endif %}
</td>
</tr>
{% endfor %}
</tbody>
</table>
<div id="check-all-button">
<a href="/api/checknow{% if active_tag%}?tag={{active_tag}}{%endif%}" class="pure-button button-tag " >Recheck all {% if active_tag%}in "{{active_tag}}"{%endif%}</a>
</div>
</div>
</div>
{% endblock %}

changedetection.py Executable file

@@ -0,0 +1,6 @@
#!/usr/bin/env python3
# Only exists for direct CLI usage
import changedetectionio
changedetectionio.main()

changedetectionio/.gitignore vendored Normal file

@@ -0,0 +1,2 @@
test-datastore
package-lock.json


@@ -0,0 +1,197 @@
#!/usr/bin/env python3
# Read more https://github.com/dgtlmoon/changedetection.io/wiki
__version__ = '0.49.0'
from changedetectionio.strtobool import strtobool
from json.decoder import JSONDecodeError
import os
os.environ['EVENTLET_NO_GREENDNS'] = 'yes'
import eventlet
import eventlet.wsgi
import getopt
import signal
import socket
import sys
from changedetectionio import store
from changedetectionio.flask_app import changedetection_app
from loguru import logger
# Only global so we can access it in the signal handler
app = None
datastore = None
def get_version():
return __version__
# Parent wrapper or OS sends us a SIGTERM/SIGINT, do everything required for a clean shutdown
def sigshutdown_handler(_signo, _stack_frame):
global app
global datastore
name = signal.Signals(_signo).name
logger.critical(f'Shutdown: Got Signal - {name} ({_signo}), Saving DB to disk and calling shutdown')
datastore.sync_to_json()
logger.success('Sync JSON to disk complete.')
# This will throw a SystemExit exception, which eventlet.wsgi.server doesn't know how to deal with.
# Solution: move to gevent or other server in the future (#2014)
datastore.stop_thread = True
app.config.exit.set()
sys.exit()
def main():
global datastore
global app
datastore_path = None
do_cleanup = False
host = ''
ipv6_enabled = False
port = os.environ.get('PORT') or 5000
ssl_mode = False
# On Windows, create and use a default path.
if os.name == 'nt':
datastore_path = os.path.expandvars(r'%APPDATA%\changedetection.io')
os.makedirs(datastore_path, exist_ok=True)
else:
# Must be absolute so that send_from_directory doesn't try to make it relative to backend/
datastore_path = os.path.join(os.getcwd(), "../datastore")
try:
opts, args = getopt.getopt(sys.argv[1:], "6Ccsd:h:p:l:", "port")
except getopt.GetoptError:
print('backend.py -s SSL enable -h [host] -p [port] -d [datastore path] -l [debug level - TRACE, DEBUG(default), INFO, SUCCESS, WARNING, ERROR, CRITICAL]')
sys.exit(2)
create_datastore_dir = False
# Set a default logger level
logger_level = 'DEBUG'
# Set a logger level via shell env variable
# Used: Dockerfile for CICD
# To set logger level for pytest, see the app function in tests/conftest.py
if os.getenv("LOGGER_LEVEL"):
level = os.getenv("LOGGER_LEVEL")
logger_level = int(level) if level.isdigit() else level.upper()
for opt, arg in opts:
if opt == '-s':
ssl_mode = True
if opt == '-h':
host = arg
if opt == '-p':
port = int(arg)
if opt == '-d':
datastore_path = arg
if opt == '-6':
logger.success("Enabling IPv6 listen support")
ipv6_enabled = True
# Cleanup (remove text files that aren't in the index)
if opt == '-c':
do_cleanup = True
# Create the datadir if it doesn't exist
if opt == '-C':
create_datastore_dir = True
if opt == '-l':
logger_level = int(arg) if arg.isdigit() else arg.upper()
# Without this, a logger will be duplicated
logger.remove()
try:
log_level_for_stdout = { 'DEBUG', 'SUCCESS' }
logger.configure(handlers=[
{"sink": sys.stdout, "level": logger_level,
"filter" : lambda record: record['level'].name in log_level_for_stdout},
{"sink": sys.stderr, "level": logger_level,
"filter": lambda record: record['level'].name not in log_level_for_stdout},
])
# Catch negative number or wrong log level name
except ValueError:
print("Available log level names: TRACE, DEBUG(default), INFO, SUCCESS,"
" WARNING, ERROR, CRITICAL")
sys.exit(2)
# isn't there some @thingy to attach to each route to tell it that this route needs a datastore?
app_config = {'datastore_path': datastore_path}
if not os.path.isdir(app_config['datastore_path']):
if create_datastore_dir:
os.mkdir(app_config['datastore_path'])
else:
logger.critical(
f"ERROR: Directory path for the datastore '{app_config['datastore_path']}'"
f" does not exist, cannot start, please make sure the"
f" directory exists or specify a directory with the -d option.\n"
f"Or use the -C parameter to create the directory.")
sys.exit(2)
try:
datastore = store.ChangeDetectionStore(datastore_path=app_config['datastore_path'], version_tag=__version__)
except JSONDecodeError as e:
# Don't start if the JSON DB looks corrupt
logger.critical(f"ERROR: JSON DB or Proxy List JSON at '{app_config['datastore_path']}' appears to be corrupt, aborting.")
logger.critical(str(e))
return
app = changedetection_app(app_config, datastore)
signal.signal(signal.SIGTERM, sigshutdown_handler)
signal.signal(signal.SIGINT, sigshutdown_handler)
# Go into cleanup mode
if do_cleanup:
datastore.remove_unused_snapshots()
app.config['datastore_path'] = datastore_path
@app.context_processor
def inject_version():
return dict(right_sticky="v{}".format(datastore.data['version_tag']),
new_version_available=app.config['NEW_VERSION_AVAILABLE'],
has_password=datastore.data['settings']['application']['password'] != False
)
# Monitored websites will not receive a Referer header when a user clicks on an outgoing link.
@app.after_request
def hide_referrer(response):
if strtobool(os.getenv("HIDE_REFERER", 'false')):
response.headers["Referrer-Policy"] = "same-origin"
return response
# Proxy sub-directory support
# Set environment var USE_X_SETTINGS=1 on this script
# And then in your proxy_pass settings
#
# proxy_set_header Host "localhost";
# proxy_set_header X-Forwarded-Prefix /app;
if os.getenv('USE_X_SETTINGS'):
logger.info("USE_X_SETTINGS is ENABLED")
from werkzeug.middleware.proxy_fix import ProxyFix
app.wsgi_app = ProxyFix(app.wsgi_app, x_prefix=1, x_host=1)
s_type = socket.AF_INET6 if ipv6_enabled else socket.AF_INET
if ssl_mode:
# @todo finalise SSL config, but this should get you in the right direction if you need it.
eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen((host, port), s_type),
certfile='cert.pem',
keyfile='privkey.pem',
server_side=True), app)
else:
eventlet.wsgi.server(eventlet.listen((host, int(port)), s_type), app)


@@ -0,0 +1,117 @@
# Responsible for building the storage dict into a set of rules ("JSON Schema") acceptable via the API
# Probably other ways to solve this when the backend switches to some ORM
def build_time_between_check_json_schema():
# Setup time between check schema
schema_properties_time_between_check = {
"type": "object",
"additionalProperties": False,
"properties": {}
}
for p in ['weeks', 'days', 'hours', 'minutes', 'seconds']:
schema_properties_time_between_check['properties'][p] = {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
]
}
return schema_properties_time_between_check
def build_watch_json_schema(d):
# Base JSON schema
schema = {
'type': 'object',
'properties': {},
}
for k, v in d.items():
# @todo 'integer' is not covered here because it's almost always for internal usage
if isinstance(v, type(None)):
schema['properties'][k] = {
"anyOf": [
{"type": "null"},
]
}
elif isinstance(v, list):
schema['properties'][k] = {
"anyOf": [
{"type": "array",
# Always is an array of strings, like text or regex or something
"items": {
"type": "string",
"maxLength": 5000
}
},
]
}
elif isinstance(v, bool):
schema['properties'][k] = {
"anyOf": [
{"type": "boolean"},
]
}
elif isinstance(v, str):
schema['properties'][k] = {
"anyOf": [
{"type": "string",
"maxLength": 5000},
]
}
# Can also be a string (or None by default above)
for v in ['body',
'notification_body',
'notification_format',
'notification_title',
'proxy',
'tag',
'title',
'webdriver_js_execute_code'
]:
schema['properties'][v]['anyOf'].append({'type': 'string', "maxLength": 5000})
# None or Boolean
schema['properties']['track_ldjson_price_data']['anyOf'].append({'type': 'boolean'})
schema['properties']['method'] = {"type": "string",
"enum": ["GET", "POST", "DELETE", "PUT"]
}
schema['properties']['fetch_backend']['anyOf'].append({"type": "string",
"enum": ["html_requests", "html_webdriver"]
})
# All headers must be key/value type dict
schema['properties']['headers'] = {
"type": "object",
"patternProperties": {
# Should always be a string:string type value
".*": {"type": "string"},
}
}
from changedetectionio.notification import valid_notification_formats
schema['properties']['notification_format'] = {'type': 'string',
'enum': list(valid_notification_formats.keys())
}
# Stuff that shouldn't be available but is just state-storage
for v in ['previous_md5', 'last_error', 'has_ldjson_price_data', 'previous_md5_before_filters', 'uuid']:
del schema['properties'][v]
schema['properties']['webdriver_delay']['anyOf'].append({'type': 'integer'})
schema['properties']['time_between_check'] = build_time_between_check_json_schema()
# headers ?
return schema
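Since build_watch_json_schema() returns a plain JSON Schema dict, it can be exercised directly with the jsonschema library, which is essentially the validation flask_expects_json performs on the API endpoints. A small usage sketch, assuming the functions above are importable and used with the real watch_base() defaults; the example payloads are illustrative.

# Usage sketch (illustrative, not part of this file)
from jsonschema import validate, ValidationError
from changedetectionio.model import watch_base
# build_watch_json_schema is the function defined above

schema = build_watch_json_schema(watch_base())

for payload in ({'title': 'Example watch'},             # a null-or-string field
                {'time_between_check': {'hours': 3}}):  # nested integer-or-null fields
    try:
        validate(instance=payload, schema=schema)
        print("accepted:", payload)
    except ValidationError as e:
        print("rejected:", payload, "-", e.message)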


@@ -0,0 +1,416 @@
import os
from changedetectionio.strtobool import strtobool
from flask_expects_json import expects_json
from changedetectionio import queuedWatchMetaData
from flask_restful import abort, Resource
from flask import request, make_response
import validators
from . import auth
import copy
# See docs/README.md for rebuilding the docs/apidoc information
from . import api_schema
from ..model import watch_base
# Build a JSON Schema at least partially based on our Watch model
watch_base_config = watch_base()
schema = api_schema.build_watch_json_schema(watch_base_config)
schema_create_watch = copy.deepcopy(schema)
schema_create_watch['required'] = ['url']
schema_update_watch = copy.deepcopy(schema)
schema_update_watch['additionalProperties'] = False
class Watch(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
self.update_q = kwargs['update_q']
# Get information about a single watch, excluding the history list (can be large)
# curl http://localhost:5000/api/v1/watch/<string:uuid>
# @todo - version2 - ?muted and ?paused should be able to be called together, return the watch struct not "OK"
# ?recheck=true
@auth.check_token
def get(self, uuid):
"""
@api {get} /api/v1/watch/:uuid Single watch - get data, recheck, pause, mute.
@apiDescription Retrieve watch information and set muted/paused status
@apiExample {curl} Example usage:
curl http://localhost:5000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091 -H"x-api-key:813031b16330fe25e3780cf0325daa45"
curl "http://localhost:5000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091?muted=unmuted" -H"x-api-key:813031b16330fe25e3780cf0325daa45"
curl "http://localhost:5000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091?paused=unpaused" -H"x-api-key:813031b16330fe25e3780cf0325daa45"
@apiName Watch
@apiGroup Watch
@apiParam {uuid} uuid Watch unique ID.
@apiQuery {Boolean} [recheck] Recheck this watch `recheck=1`
@apiQuery {String} [paused] =`paused` or =`unpaused` , Sets the PAUSED state
@apiQuery {String} [muted] =`muted` or =`unmuted` , Sets the MUTE NOTIFICATIONS state
@apiSuccess (200) {String} OK When paused/muted/recheck operation OR full JSON object of the watch
@apiSuccess (200) {JSON} WatchJSON JSON Full JSON object of the watch
"""
from copy import deepcopy
watch = deepcopy(self.datastore.data['watching'].get(uuid))
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
if request.args.get('recheck'):
self.update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid}))
return "OK", 200
if request.args.get('paused', '') == 'paused':
self.datastore.data['watching'].get(uuid).pause()
return "OK", 200
elif request.args.get('paused', '') == 'unpaused':
self.datastore.data['watching'].get(uuid).unpause()
return "OK", 200
if request.args.get('muted', '') == 'muted':
self.datastore.data['watching'].get(uuid).mute()
return "OK", 200
elif request.args.get('muted', '') == 'unmuted':
self.datastore.data['watching'].get(uuid).unmute()
return "OK", 200
# Return without history, get that via another API call
# Properties are not returned as a JSON, so add the required props manually
watch['history_n'] = watch.history_n
# attr .last_changed will check for the last written text snapshot on change
watch['last_changed'] = watch.last_changed
watch['viewed'] = watch.viewed
return watch
@auth.check_token
def delete(self, uuid):
"""
@api {delete} /api/v1/watch/:uuid Delete a watch and related history
@apiExample {curl} Example usage:
curl http://localhost:5000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091 -X DELETE -H"x-api-key:813031b16330fe25e3780cf0325daa45"
@apiParam {uuid} uuid Watch unique ID.
@apiName Delete
@apiGroup Watch
@apiSuccess (200) {String} OK Was deleted
"""
if not self.datastore.data['watching'].get(uuid):
abort(400, message='No watch exists with the UUID of {}'.format(uuid))
self.datastore.delete(uuid)
return 'OK', 204
@auth.check_token
@expects_json(schema_update_watch)
def put(self, uuid):
"""
@api {put} /api/v1/watch/:uuid Update watch information
@apiExample {curl} Example usage:
Update (PUT)
curl http://localhost:5000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091 -X PUT -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json" -d '{"url": "https://my-nice.com" , "tag": "new list"}'
@apiDescription Updates an existing watch using JSON, accepts the same structure as returned in <a href="#api-Watch-Watch">get single watch information</a>
@apiParam {uuid} uuid Watch unique ID.
@apiName Update a watch
@apiGroup Watch
@apiSuccess (200) {String} OK Was updated
@apiSuccess (500) {String} ERR Some other error
"""
watch = self.datastore.data['watching'].get(uuid)
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
if request.json.get('proxy'):
plist = self.datastore.proxy_list
if not request.json.get('proxy') in plist:
return "Invalid proxy choice, currently supported proxies are '{}'".format(', '.join(plist)), 400
watch.update(request.json)
return "OK", 200
class WatchHistory(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
# Get a list of available history for a watch by UUID
# curl http://localhost:5000/api/v1/watch/<string:uuid>/history
@auth.check_token
def get(self, uuid):
"""
@api {get} /api/v1/watch/<string:uuid>/history Get a list of all historical snapshots available for a watch
@apiDescription Requires `uuid`, returns list
@apiExample {curl} Example usage:
curl http://localhost:5000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091/history -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json"
{
"1676649279": "/tmp/data/6a4b7d5c-fee4-4616-9f43-4ac97046b595/cb7e9be8258368262246910e6a2a4c30.txt",
"1677092785": "/tmp/data/6a4b7d5c-fee4-4616-9f43-4ac97046b595/e20db368d6fc633e34f559ff67bb4044.txt",
"1677103794": "/tmp/data/6a4b7d5c-fee4-4616-9f43-4ac97046b595/02efdd37dacdae96554a8cc85dc9c945.txt"
}
@apiName Get list of available stored snapshots for watch
@apiGroup Watch History
@apiSuccess (200) {String} OK
@apiSuccess (404) {String} ERR Not found
"""
watch = self.datastore.data['watching'].get(uuid)
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
return watch.history, 200
class WatchSingleHistory(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
@auth.check_token
def get(self, uuid, timestamp):
"""
@api {get} /api/v1/watch/<string:uuid>/history/<int:timestamp> Get single snapshot from watch
@apiDescription Requires watch `uuid` and `timestamp`. `timestamp` of "`latest`" for latest available snapshot, or <a href="#api-Watch_History-Get_list_of_available_stored_snapshots_for_watch">use the list returned here</a>
@apiExample {curl} Example usage:
curl http://localhost:5000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091/history/1677092977 -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json"
@apiName Get single snapshot content
@apiGroup Watch History
@apiParam {String} [html] Optional Set to =1 to return the last HTML (only stores last 2 snapshots, use `latest` as timestamp)
@apiSuccess (200) {String} OK
@apiSuccess (404) {String} ERR Not found
"""
watch = self.datastore.data['watching'].get(uuid)
if not watch:
abort(404, message=f"No watch exists with the UUID of {uuid}")
if not len(watch.history):
abort(404, message=f"Watch found but no history exists for the UUID {uuid}")
if timestamp == 'latest':
timestamp = list(watch.history.keys())[-1]
if request.args.get('html'):
content = watch.get_fetched_html(timestamp)
if content:
response = make_response(content, 200)
response.mimetype = "text/html"
else:
response = make_response("No content found", 404)
response.mimetype = "text/plain"
else:
content = watch.get_history_snapshot(timestamp)
response = make_response(content, 200)
response.mimetype = "text/plain"
return response
class CreateWatch(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
self.update_q = kwargs['update_q']
@auth.check_token
@expects_json(schema_create_watch)
def post(self):
"""
@api {post} /api/v1/watch Create a single watch
@apiDescription Requires at least `url` set, can accept the same structure as <a href="#api-Watch-Watch">get single watch information</a> to create.
@apiExample {curl} Example usage:
curl http://localhost:5000/api/v1/watch -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json" -d '{"url": "https://my-nice.com" , "tag": "nice list"}'
@apiName Create
@apiGroup Watch
@apiSuccess (200) {String} OK Was created
@apiSuccess (500) {String} ERR Some other error
"""
json_data = request.get_json()
url = json_data['url'].strip()
# If hosts that only contain alphanumerics are allowed ("localhost" for example)
allow_simplehost = not strtobool(os.getenv('BLOCK_SIMPLEHOSTS', 'False'))
if not validators.url(url, simple_host=allow_simplehost):
return "Invalid or unsupported URL", 400
if json_data.get('proxy'):
plist = self.datastore.proxy_list
if not json_data.get('proxy') in plist:
return "Invalid proxy choice, currently supported proxies are '{}'".format(', '.join(plist)), 400
extras = copy.deepcopy(json_data)
# Because we renamed 'tag' to 'tags' but don't want to change the API (can do this in v2 of the API)
tags = None
if extras.get('tag'):
tags = extras.get('tag')
del extras['tag']
del extras['url']
new_uuid = self.datastore.add_watch(url=url, extras=extras, tag=tags)
if new_uuid:
self.update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': new_uuid}))
return {'uuid': new_uuid}, 201
else:
return "Invalid or unsupported URL", 400
@auth.check_token
def get(self):
"""
@api {get} /api/v1/watch List watches
@apiDescription Return concise list of available watches and some very basic info
@apiExample {curl} Example usage:
curl http://localhost:5000/api/v1/watch -H"x-api-key:813031b16330fe25e3780cf0325daa45"
{
"6a4b7d5c-fee4-4616-9f43-4ac97046b595": {
"last_changed": 1677103794,
"last_checked": 1677103794,
"last_error": false,
"title": "",
"url": "http://www.quotationspage.com/random.php"
},
"e6f5fd5c-dbfe-468b-b8f3-f9d6ff5ad69b": {
"last_changed": 0,
"last_checked": 1676662819,
"last_error": false,
"title": "QuickLook",
"url": "https://github.com/QL-Win/QuickLook/tags"
}
}
@apiParam {String} [recheck_all] Optional Set to =1 to force recheck of all watches
@apiParam {String} [tag] Optional name of tag to limit results
@apiName ListWatches
@apiGroup Watch Management
@apiSuccess (200) {String} OK JSON dict
"""
list = {}
tag_limit = request.args.get('tag', '').lower()
for uuid, watch in self.datastore.data['watching'].items():
# Watch tags by name (replace the other calls?)
tags = self.datastore.get_all_tags_for_watch(uuid=uuid)
if tag_limit and not any(v.get('title').lower() == tag_limit for k, v in tags.items()):
continue
list[uuid] = {
'last_changed': watch.last_changed,
'last_checked': watch['last_checked'],
'last_error': watch['last_error'],
'title': watch['title'],
'url': watch['url'],
'viewed': watch.viewed
}
if request.args.get('recheck_all'):
for uuid in self.datastore.data['watching'].keys():
self.update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid}))
return {'status': "OK"}, 200
return list, 200
class Import(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
@auth.check_token
def post(self):
"""
@api {post} /api/v1/import Import a list of watched URLs
@apiDescription Accepts a line-feed separated list of URLs to import (one URL per line); additionally accepts ?tag_uuids=(tag id), ?tag=(name), ?proxy={key}, ?dedupe=true (default true).
@apiExample {curl} Example usage:
curl http://localhost:5000/api/v1/import --data-binary @list-of-sites.txt -H"x-api-key:8a111a21bc2f8f1dd9b9353bbd46049a"
@apiName Import
@apiGroup Watch
@apiSuccess (200) {List} OK List of watch UUIDs added
@apiSuccess (500) {String} ERR Some other error
"""
extras = {}
if request.args.get('proxy'):
plist = self.datastore.proxy_list
if not request.args.get('proxy') in plist:
return "Invalid proxy choice, currently supported proxies are '{}'".format(', '.join(plist)), 400
else:
extras['proxy'] = request.args.get('proxy')
dedupe = strtobool(request.args.get('dedupe', 'true'))
tags = request.args.get('tag')
tag_uuids = request.args.get('tag_uuids')
if tag_uuids:
tag_uuids = tag_uuids.split(',')
urls = request.get_data().decode('utf8').splitlines()
added = []
allow_simplehost = not strtobool(os.getenv('BLOCK_SIMPLEHOSTS', 'False'))
for url in urls:
url = url.strip()
if not len(url):
continue
# If hosts that only contain alphanumerics are allowed ("localhost" for example)
if not validators.url(url, simple_host=allow_simplehost):
return f"Invalid or unsupported URL - {url}", 400
if dedupe and self.datastore.url_exists(url):
continue
new_uuid = self.datastore.add_watch(url=url, extras=extras, tag=tags, tag_uuids=tag_uuids)
added.append(new_uuid)
return added
class SystemInfo(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
self.update_q = kwargs['update_q']
@auth.check_token
def get(self):
"""
@api {get} /api/v1/systeminfo Return system info
@apiDescription Return some info about the current system state
@apiExample {curl} Example usage:
curl http://localhost:5000/api/v1/systeminfo -H"x-api-key:813031b16330fe25e3780cf0325daa45"
HTTP/1.0 200
{
'queue_size': 10 ,
'overdue_watches': ["watch-uuid-list"],
'uptime': 38344.55,
'watch_count': 800,
'version': "0.40.1"
}
@apiName Get Info
@apiGroup System Information
"""
import time
overdue_watches = []
# Check all watches and report which have not been checked but should have been
for uuid, watch in self.datastore.data.get('watching', {}).items():
# see if now - last_checked is greater than the time that should have been
# this is not super accurate (maybe they just edited it) but better than nothing
t = watch.threshold_seconds()
if not t:
# Use the system wide default
t = self.datastore.threshold_seconds
time_since_check = time.time() - watch.get('last_checked')
# Allow 5 minutes of grace time before we decide it's overdue
if time_since_check - (5 * 60) > t:
overdue_watches.append(uuid)
from changedetectionio import __version__ as main_version
return {
'queue_size': self.update_q.qsize(),
'overdue_watches': overdue_watches,
'uptime': round(time.time() - self.datastore.start_time, 2),
'watch_count': len(self.datastore.data.get('watching', {})),
'version': main_version
}, 200
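Every Resource above pulls datastore (and sometimes update_q) out of kwargs rather than importing it, the "black box dependency" noted in each constructor. With flask_restful those objects are supplied when the resource is registered, via resource_class_kwargs. A wiring sketch, assuming an existing Flask app, ChangeDetectionStore instance and update queue; variable names are illustrative, and the endpoint paths follow the apidoc comments above.

# Wiring sketch (illustrative): how the resources receive their dependencies.
from flask import Flask
from flask_restful import Api

app = Flask(__name__)
api = Api(app)
# Assumed to already exist: datastore = ChangeDetectionStore(...), update_q = the app's priority queue
api.add_resource(CreateWatch, '/api/v1/watch',
                 resource_class_kwargs={'datastore': datastore, 'update_q': update_q})
api.add_resource(Watch, '/api/v1/watch/<string:uuid>',
                 resource_class_kwargs={'datastore': datastore, 'update_q': update_q})
api.add_resource(WatchHistory, '/api/v1/watch/<string:uuid>/history',
                 resource_class_kwargs={'datastore': datastore})
api.add_resource(SystemInfo, '/api/v1/systeminfo',
                 resource_class_kwargs={'datastore': datastore, 'update_q': update_q})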


@@ -0,0 +1,33 @@
from flask import request, make_response, jsonify
from functools import wraps
# Simple API auth key comparison
# @todo - Maybe short lived token in the future?
def check_token(f):
@wraps(f)
def decorated(*args, **kwargs):
datastore = args[0].datastore
config_api_token_enabled = datastore.data['settings']['application'].get('api_access_token_enabled')
if not config_api_token_enabled:
return
try:
api_key_header = request.headers['x-api-key']
except KeyError:
return make_response(
jsonify("No authorization x-api-key header."), 403
)
config_api_token = datastore.data['settings']['application'].get('api_access_token')
if api_key_header != config_api_token:
return make_response(
jsonify("Invalid access - API key invalid."), 403
)
return f(*args, **kwargs)
return decorated
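From the client side, the decorator simply expects every request to carry an x-api-key header matching settings['application']['api_access_token'], and returns 403 otherwise. The curl examples in the apidoc blocks above show this; the equivalent call with the requests library would look like the sketch below (the key value is just the example reused from those docstrings).

# Client-side sketch (illustrative): calling an endpoint protected by check_token.
import requests

API_KEY = '813031b16330fe25e3780cf0325daa45'  # example value from the apidoc samples
r = requests.get('http://localhost:5000/api/v1/watch', headers={'x-api-key': API_KEY})
print(r.status_code, r.json() if r.ok else r.text)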


@@ -0,0 +1,12 @@
from changedetectionio import apprise_plugin
import apprise
# Create our AppriseAsset and populate it with some of our new values:
# https://github.com/caronc/apprise/wiki/Development_API#the-apprise-asset-object
asset = apprise.AppriseAsset(
image_url_logo='https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/changedetectionio/static/images/avatar-256x256.png'
)
asset.app_id = "changedetection.io"
asset.app_desc = "ChangeDetection.io best and simplest website monitoring and change detection"
asset.app_url = "https://changedetection.io"


@@ -0,0 +1,98 @@
# include the decorator
from apprise.decorators import notify
from loguru import logger
from requests.structures import CaseInsensitiveDict
@notify(on="delete")
@notify(on="deletes")
@notify(on="get")
@notify(on="gets")
@notify(on="post")
@notify(on="posts")
@notify(on="put")
@notify(on="puts")
def apprise_custom_api_call_wrapper(body, title, notify_type, *args, **kwargs):
import requests
import json
import re
from urllib.parse import unquote_plus
from apprise.utils.parse import parse_url as apprise_parse_url
url = kwargs['meta'].get('url')
schema = kwargs['meta'].get('schema').lower().strip()
# Choose POST, GET etc from requests
method = re.sub(rf's$', '', schema)
requests_method = getattr(requests, method)
params = CaseInsensitiveDict({}) # Added to requests
auth = None
has_error = False
# Convert /foobar?+some-header=hello to proper header dictionary
results = apprise_parse_url(url)
# Add our headers that the user can potentially over-ride if they wish
# to our returned result set, and tidy entries by unquoting them
headers = CaseInsensitiveDict({unquote_plus(x): unquote_plus(y)
for x, y in results['qsd+'].items()})
# https://github.com/caronc/apprise/wiki/Notify_Custom_JSON#get-parameter-manipulation
# In Apprise, it relies on prefixing each request arg with "-", because it uses say &method=update as a flag for apprise
# but here we are making straight requests, so we need to convert this against Apprise's logic
for k, v in results['qsd'].items():
if not k.strip('+-') in results['qsd+'].keys():
params[unquote_plus(k)] = unquote_plus(v)
# Determine Authentication
auth = ''
if results.get('user') and results.get('password'):
auth = (unquote_plus(results.get('user')), unquote_plus(results.get('password')))
elif results.get('user'):
auth = (unquote_plus(results.get('user')), '')
# If it smells like it could be JSON and no content-type was already set, offer a default content type.
if body and '{' in body[:100] and not headers.get('Content-Type'):
json_header = 'application/json; charset=utf-8'
try:
# Try if it's JSON
json.loads(body)
headers['Content-Type'] = json_header
except ValueError as e:
logger.warning(f"Could not automatically add '{json_header}' header to the notification because the document failed to parse as JSON: {e}")
pass
# POSTS -> HTTPS etc
if schema.lower().endswith('s'):
url = re.sub(rf'^{schema}', 'https', results.get('url'))
else:
url = re.sub(rf'^{schema}', 'http', results.get('url'))
status_str = ''
try:
r = requests_method(url,
auth=auth,
data=body.encode('utf-8') if type(body) is str else body,
headers=headers,
params=params
)
if not (200 <= r.status_code < 300):
status_str = f"Error sending '{method.upper()}' request to {url} - Status: {r.status_code}: '{r.reason}'"
logger.error(status_str)
has_error = True
else:
logger.info(f"Sent '{method.upper()}' request to {url}")
has_error = False
except requests.RequestException as e:
status_str = f"Error sending '{method.upper()}' request to {url} - {str(e)}"
logger.error(status_str)
has_error = True
if has_error:
raise TypeError(status_str)
return True
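Because the wrapper is registered via @notify for the get/post/put/delete schemas (and their 's'-suffixed HTTPS variants), Apprise can drive it from a plain notification URL. A usage sketch follows; the URL is illustrative and shows the pieces handled above: user:pass becoming basic auth, a '+' prefixed query arg becoming a request header, a plain arg becoming a request parameter, and the trailing 's' selecting https.

# Usage sketch (illustrative URL): exercising the custom schema registered above.
import apprise
from changedetectionio import apprise_plugin  # importing registers the @notify schemas

apobj = apprise.Apprise()
# posts:// -> HTTPS POST, '+Content-Type=...' -> header, 'status=ok' -> parameter, user:pass -> basic auth
apobj.add('posts://user:pass@api.example.com/hooks/changedetection?+Content-Type=application/json&status=ok')
apobj.notify(title='Price changed', body='{"watch": "example", "price": 42}')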


@@ -0,0 +1,164 @@
import datetime
import glob
import threading
from flask import Blueprint, render_template, send_from_directory, flash, url_for, redirect, abort
import os
from changedetectionio.store import ChangeDetectionStore
from changedetectionio.flask_app import login_optionally_required
from loguru import logger
BACKUP_FILENAME_FORMAT = "changedetection-backup-{}.zip"
def create_backup(datastore_path, watches: dict):
logger.debug("Creating backup...")
import zipfile
from pathlib import Path
# create a ZipFile object
timestamp = datetime.datetime.now().strftime("%Y%m%d%H%M%S")
backupname = BACKUP_FILENAME_FORMAT.format(timestamp)
backup_filepath = os.path.join(datastore_path, backupname)
with zipfile.ZipFile(backup_filepath.replace('.zip', '.tmp'), "w",
compression=zipfile.ZIP_DEFLATED,
compresslevel=8) as zipObj:
# Add the index
zipObj.write(os.path.join(datastore_path, "url-watches.json"), arcname="url-watches.json")
# Add the flask app secret
zipObj.write(os.path.join(datastore_path, "secret.txt"), arcname="secret.txt")
# Add any data in the watch data directory.
for uuid, w in watches.items():
for f in Path(w.watch_data_dir).glob('*'):
zipObj.write(f,
# Use the full path to access the file, but make the file 'relative' in the Zip.
arcname=os.path.join(f.parts[-2], f.parts[-1]),
compress_type=zipfile.ZIP_DEFLATED,
compresslevel=8)
# Create a list file with just the URLs, so it's easier to port somewhere else in the future
list_file = "url-list.txt"
with open(os.path.join(datastore_path, list_file), "w") as f:
for uuid in watches:
url = watches[uuid]["url"]
f.write("{}\r\n".format(url))
list_with_tags_file = "url-list-with-tags.txt"
with open(
os.path.join(datastore_path, list_with_tags_file), "w"
) as f:
for uuid in watches:
url = watches[uuid].get('url')
tag = watches[uuid].get('tags', {})
f.write("{} {}\r\n".format(url, tag))
# Add it to the Zip
zipObj.write(
os.path.join(datastore_path, list_file),
arcname=list_file,
compress_type=zipfile.ZIP_DEFLATED,
compresslevel=8,
)
zipObj.write(
os.path.join(datastore_path, list_with_tags_file),
arcname=list_with_tags_file,
compress_type=zipfile.ZIP_DEFLATED,
compresslevel=8,
)
# Now it's done, rename it so it finally shows up and it's clear that writing has completed.
os.rename(backup_filepath.replace('.zip', '.tmp'), backup_filepath.replace('.tmp', '.zip'))
def construct_blueprint(datastore: ChangeDetectionStore):
backups_blueprint = Blueprint('backups', __name__, template_folder="templates")
backup_threads = []
@login_optionally_required
@backups_blueprint.route("/request-backup", methods=['GET'])
def request_backup():
if any(thread.is_alive() for thread in backup_threads):
flash("A backup is already running, check back in a few minutes", "error")
return redirect(url_for('backups.index'))
if len(find_backups()) > int(os.getenv("MAX_NUMBER_BACKUPS", 100)):
flash("Maximum number of backups reached, please remove some", "error")
return redirect(url_for('backups.index'))
# Be sure we're written fresh
datastore.sync_to_json()
zip_thread = threading.Thread(target=create_backup, args=(datastore.datastore_path, datastore.data.get("watching")))
zip_thread.start()
backup_threads.append(zip_thread)
flash("Backup building in background, check back in a few minutes.")
return redirect(url_for('backups.index'))
def find_backups():
backup_filepath = os.path.join(datastore.datastore_path, BACKUP_FILENAME_FORMAT.format("*"))
backups = glob.glob(backup_filepath)
backup_info = []
for backup in backups:
size = os.path.getsize(backup) / (1024 * 1024)
creation_time = os.path.getctime(backup)
backup_info.append({
'filename': os.path.basename(backup),
'filesize': f"{size:.2f}",
'creation_time': creation_time
})
backup_info.sort(key=lambda x: x['creation_time'], reverse=True)
return backup_info
@login_optionally_required
@backups_blueprint.route("/download/<string:filename>", methods=['GET'])
def download_backup(filename):
import re
filename = filename.strip()
backup_filename_regex = BACKUP_FILENAME_FORMAT.format(r"\d+")
full_path = os.path.join(os.path.abspath(datastore.datastore_path), filename)
if not full_path.startswith(os.path.abspath(datastore.datastore_path)):
abort(404)
if filename == 'latest':
backups = find_backups()
filename = backups[0]['filename']
if not re.match(r"^" + backup_filename_regex + "$", filename):
abort(400) # Bad Request if the filename doesn't match the pattern
logger.debug(f"Backup download request for '{full_path}'")
return send_from_directory(os.path.abspath(datastore.datastore_path), filename, as_attachment=True)
@login_optionally_required
@backups_blueprint.route("/", methods=['GET'])
def index():
backups = find_backups()
output = render_template("overview.html",
available_backups=backups,
backup_running=any(thread.is_alive() for thread in backup_threads)
)
return output
@login_optionally_required
@backups_blueprint.route("/remove-backups", methods=['GET'])
def remove_backups():
backup_filepath = os.path.join(datastore.datastore_path, BACKUP_FILENAME_FORMAT.format("*"))
backups = glob.glob(backup_filepath)
for backup in backups:
os.unlink(backup)
flash("Backups were deleted.")
return redirect(url_for('backups.index'))
return backups_blueprint
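construct_blueprint() is a factory: it closes over the datastore and returns a Blueprint ready to be attached to the main Flask application. A registration sketch; the app variable and the url_prefix are assumptions for illustration, not taken from this diff.

# Registration sketch (illustrative): attaching the backups blueprint to the app.
from flask import Flask

app = Flask(__name__)
# 'datastore' is assumed to be an existing ChangeDetectionStore; the prefix is an assumption
app.register_blueprint(construct_blueprint(datastore), url_prefix='/backups')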


@@ -0,0 +1,36 @@
{% extends 'base.html' %}
{% block content %}
{% from '_helpers.html' import render_simple_field, render_field %}
<div class="edit-form">
<div class="box-wrap inner">
<h4>Backups</h4>
{% if backup_running %}
<p>
<strong>A backup is running!</strong>
</p>
{% endif %}
<p>
Here you can download and request a new backup, when a backup is completed you will see it listed below.
</p>
<br>
{% if available_backups %}
<ul>
{% for backup in available_backups %}
<li><a href="{{ url_for('backups.download_backup', filename=backup["filename"]) }}">{{ backup["filename"] }}</a> {{ backup["filesize"] }} Mb</li>
{% endfor %}
</ul>
{% else %}
<p>
<strong>No backups found.</strong>
</p>
{% endif %}
<a class="pure-button pure-button-primary" href="{{ url_for('backups.request_backup') }}">Create backup</a>
{% if available_backups %}
<a class="pure-button button-small button-error " href="{{ url_for('backups.remove_backups') }}">Remove backups</a>
{% endif %}
</div>
</div>
{% endblock %}


@@ -0,0 +1,7 @@
- This needs an abstraction to directly handle the puppeteer connection methods
- Then remove the playwright stuff
- Remove the hack redirect at line 65 of changedetectionio/processors/__init__.py
The screenshots are base64 encoded/decoded, which is very CPU intensive for large screenshots (in Playwright) but not
in the direct puppeteer connection (they are binary end to end).


@@ -0,0 +1,232 @@
# HORRIBLE HACK BUT WORKS :-) PR anyone?
#
# Why?
# `browsersteps_playwright_browser_interface.chromium.connect_over_cdp()` will only run once without async()
# - this flask app is not async()
# - A single timeout/keepalive which applies to the session made at .connect_over_cdp()
#
# So it means that, unfortunately, for now we must just keep a single timer running since .connect_over_cdp() was called,
# and know when it reaches the timeout/keepalive :( When that time is up, restart the connection and tell the user
# that their time is up, insert another coin. (reload)
#
#
from changedetectionio.strtobool import strtobool
from flask import Blueprint, request, make_response
import os
from changedetectionio.store import ChangeDetectionStore
from changedetectionio.flask_app import login_optionally_required
from loguru import logger
browsersteps_sessions = {}
io_interface_context = None
def construct_blueprint(datastore: ChangeDetectionStore):
browser_steps_blueprint = Blueprint('browser_steps', __name__, template_folder="templates")
def start_browsersteps_session(watch_uuid):
from . import nonContext
from . import browser_steps
import time
global browsersteps_sessions
global io_interface_context
# We keep the playwright session open for many minutes
keepalive_seconds = int(os.getenv('BROWSERSTEPS_MINUTES_KEEPALIVE', 10)) * 60
browsersteps_start_session = {'start_time': time.time()}
# You can only have one of these running
# This should be very fine to leave running for the life of the application
# @idea - Make it global so the pool of watch fetchers can use it also
if not io_interface_context:
io_interface_context = nonContext.c_sync_playwright()
# Start the Playwright context, which is actually a nodejs sub-process and communicates over STDIN/STDOUT pipes
io_interface_context = io_interface_context.start()
keepalive_ms = ((keepalive_seconds + 3) * 1000)
base_url = os.getenv('PLAYWRIGHT_DRIVER_URL', '').strip('"')
a = "?" if not '?' in base_url else '&'
base_url += a + f"timeout={keepalive_ms}"
try:
browsersteps_start_session['browser'] = io_interface_context.chromium.connect_over_cdp(base_url)
except Exception as e:
if 'ECONNREFUSED' in str(e):
return make_response('Unable to start the Playwright Browser session, is it running?', 401)
else:
# Other errors, bad URL syntax, bad reply etc
return make_response(str(e), 401)
proxy_id = datastore.get_preferred_proxy_for_watch(uuid=watch_uuid)
proxy = None
if proxy_id:
proxy_url = datastore.proxy_list.get(proxy_id).get('url')
if proxy_url:
# Playwright needs separate username and password values
from urllib.parse import urlparse
parsed = urlparse(proxy_url)
proxy = {'server': proxy_url}
if parsed.username:
proxy['username'] = parsed.username
if parsed.password:
proxy['password'] = parsed.password
logger.debug(f"Browser Steps: UUID {watch_uuid} selected proxy {proxy_url}")
# Tell Playwright to connect to Chrome and setup a new session via our stepper interface
browsersteps_start_session['browserstepper'] = browser_steps.browsersteps_live_ui(
playwright_browser=browsersteps_start_session['browser'],
proxy=proxy,
start_url=datastore.data['watching'][watch_uuid].get('url'),
headers=datastore.data['watching'][watch_uuid].get('headers')
)
# For test
#browsersteps_start_session['browserstepper'].action_goto_url(value="http://example.com?time="+str(time.time()))
return browsersteps_start_session
@login_optionally_required
@browser_steps_blueprint.route("/browsersteps_start_session", methods=['GET'])
def browsersteps_start_session():
# A new session was requested, return sessionID
import uuid
global browsersteps_sessions
browsersteps_session_id = str(uuid.uuid4())
watch_uuid = request.args.get('uuid')
if not watch_uuid:
return make_response('No Watch UUID specified', 500)
logger.debug("Starting connection with playwright")
logger.debug("browser_steps.py connecting")
browsersteps_sessions[browsersteps_session_id] = start_browsersteps_session(watch_uuid)
logger.debug("Starting connection with playwright - done")
return {'browsersteps_session_id': browsersteps_session_id}
@login_optionally_required
@browser_steps_blueprint.route("/browsersteps_image", methods=['GET'])
def browser_steps_fetch_screenshot_image():
from flask import (
make_response,
request,
send_from_directory,
)
uuid = request.args.get('uuid')
step_n = int(request.args.get('step_n'))
watch = datastore.data['watching'].get(uuid)
filename = f"step_before-{step_n}.jpeg" if request.args.get('type', '') == 'before' else f"step_{step_n}.jpeg"
if step_n and watch and os.path.isfile(os.path.join(watch.watch_data_dir, filename)):
response = make_response(send_from_directory(directory=watch.watch_data_dir, path=filename))
response.headers['Content-type'] = 'image/jpeg'
response.headers['Cache-Control'] = 'no-cache, no-store, must-revalidate'
response.headers['Pragma'] = 'no-cache'
response.headers['Expires'] = 0
return response
else:
return make_response('Unable to fetch image, is the URL correct? does the watch exist? does the step_type-n.jpeg exist?', 401)
# A request for an action was received
@login_optionally_required
@browser_steps_blueprint.route("/browsersteps_update", methods=['POST'])
def browsersteps_ui_update():
import base64
import playwright._impl._errors
global browsersteps_sessions
from changedetectionio.blueprint.browser_steps import browser_steps
remaining = 0
uuid = request.args.get('uuid')
browsersteps_session_id = request.args.get('browsersteps_session_id')
if not browsersteps_session_id:
return make_response('No browsersteps_session_id specified', 500)
if not browsersteps_sessions.get(browsersteps_session_id):
return make_response('No session exists under that ID', 500)
# Actions - step/apply/etc, do the thing and return state
if request.method == 'POST':
# @todo - should always be an existing session
step_operation = request.form.get('operation')
step_selector = request.form.get('selector')
step_optional_value = request.form.get('optional_value')
step_n = int(request.form.get('step_n'))
is_last_step = strtobool(request.form.get('is_last_step'))
# @todo try.. except.. nice errors not popups..
try:
browsersteps_sessions[browsersteps_session_id]['browserstepper'].call_action(action_name=step_operation,
selector=step_selector,
optional_value=step_optional_value)
except Exception as e:
logger.error(f"Exception when calling step operation {step_operation} {str(e)}")
# Try to find something of value to give back to the user
return make_response(str(e).splitlines()[0], 401)
# Get visual selector ready/update its data (also use the current filter info from the page?)
# When the last 'apply' button was pressed
# @todo this adds overhead because the xpath selection is happening twice
u = browsersteps_sessions[browsersteps_session_id]['browserstepper'].page.url
if is_last_step and u:
(screenshot, xpath_data) = browsersteps_sessions[browsersteps_session_id]['browserstepper'].request_visualselector_data()
watch = datastore.data['watching'].get(uuid)
if watch:
watch.save_screenshot(screenshot=screenshot)
watch.save_xpath_data(data=xpath_data)
# if not this_session.page:
# cleanup_playwright_session()
# return make_response('Browser session ran out of time :( Please reload this page.', 401)
# Screenshots and other info only needed on requesting a step (POST)
try:
state = browsersteps_sessions[browsersteps_session_id]['browserstepper'].get_current_state()
except playwright._impl._errors.Error as e:
return make_response("Browser session ran out of time :( Please reload this page."+str(e), 401)
# Use send_file() which is way faster than read/write loop on bytes
import json
from tempfile import mkstemp
from flask import send_file
tmp_fd, tmp_file = mkstemp(text=True, suffix=".json", prefix="changedetectionio-")
output = json.dumps({'screenshot': "data:image/jpeg;base64,{}".format(
base64.b64encode(state[0]).decode('ascii')),
'xpath_data': state[1],
'session_age_start': browsersteps_sessions[browsersteps_session_id]['browserstepper'].age_start,
'browser_time_remaining': round(remaining)
})
with os.fdopen(tmp_fd, 'w') as f:
f.write(output)
response = make_response(send_file(path_or_file=tmp_file,
mimetype='application/json; charset=UTF-8',
etag=True))
# No longer needed
os.unlink(tmp_file)
return response
return browser_steps_blueprint
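The header comment describes a single per-session timer: the session dict stores start_time when connect_over_cdp() is made, and BROWSERSTEPS_MINUTES_KEEPALIVE sets the budget. Below is a sketch of how the remaining time described there can be derived, for example to fill the browser_time_remaining field returned by browsersteps_ui_update(); it illustrates the mechanism and is not the project's exact implementation.

# Sketch (illustrative): deriving how long a Browser Steps session has left.
import os
import time

def seconds_remaining(session: dict) -> float:
    keepalive_seconds = int(os.getenv('BROWSERSTEPS_MINUTES_KEEPALIVE', 10)) * 60
    return max(0.0, keepalive_seconds - (time.time() - session['start_time']))

# session = browsersteps_sessions[browsersteps_session_id]  # built in start_browsersteps_session()
# if seconds_remaining(session) <= 0: tell the user their time is up and to reload the page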


@@ -0,0 +1,312 @@
#!/usr/bin/env python3
import os
import time
import re
from random import randint
from loguru import logger
from changedetectionio.content_fetchers.base import manage_user_agent
from changedetectionio.safe_jinja import render as jinja_render
# Two flags, tell the JS which of the "Selector" or "Value" field should be enabled in the front end
# 0- off, 1- on
browser_step_ui_config = {'Choose one': '0 0',
# 'Check checkbox': '1 0',
# 'Click button containing text': '0 1',
# 'Scroll to bottom': '0 0',
# 'Scroll to element': '1 0',
# 'Scroll to top': '0 0',
# 'Switch to iFrame by index number': '0 1'
# 'Uncheck checkbox': '1 0',
# @todo
'Check checkbox': '1 0',
'Click X,Y': '0 1',
'Click element if exists': '1 0',
'Click element': '1 0',
'Click element containing text': '0 1',
'Click element containing text if exists': '0 1',
'Enter text in field': '1 1',
'Execute JS': '0 1',
# 'Extract text and use as filter': '1 0',
'Goto site': '0 0',
'Goto URL': '0 1',
'Press Enter': '0 0',
'Select by label': '1 1',
'Scroll down': '0 0',
'Uncheck checkbox': '1 0',
'Wait for seconds': '0 1',
'Wait for text': '0 1',
'Wait for text in element': '1 1',
# 'Press Page Down': '0 0',
# 'Press Page Up': '0 0',
# weird bug, come back to it later
}
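# e.g. 'Enter text in field': '1 1' enables both the Selector and the Value inputs,
# while 'Press Enter': '0 0' needs neither.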
# Good reference - https://playwright.dev/python/docs/input
# https://pythonmana.com/2021/12/202112162236307035.html
#
# ONLY Works in Playwright because we need the fullscreen screenshot
class steppable_browser_interface():
page = None
start_url = None
def __init__(self, start_url):
self.start_url = start_url
# Convert and perform "Click Button" for example
def call_action(self, action_name, selector=None, optional_value=None):
now = time.time()
call_action_name = re.sub('[^0-9a-zA-Z]+', '_', action_name.lower())
if call_action_name == 'choose_one':
return
logger.debug(f"> Action calling '{call_action_name}'")
# https://playwright.dev/python/docs/selectors#xpath-selectors
if selector and selector.startswith('/') and not selector.startswith('//'):
selector = "xpath=" + selector
action_handler = getattr(self, "action_" + call_action_name)
# Support for Jinja2 variables in the value and selector
if selector and ('{%' in selector or '{{' in selector):
selector = jinja_render(template_str=selector)
if optional_value and ('{%' in optional_value or '{{' in optional_value):
optional_value = jinja_render(template_str=optional_value)
action_handler(selector, optional_value)
self.page.wait_for_timeout(1.5 * 1000)
logger.debug(f"Call action done in {time.time()-now:.2f}s")
def action_goto_url(self, selector=None, value=None):
# self.page.set_viewport_size({"width": 1280, "height": 5000})
now = time.time()
response = self.page.goto(value, timeout=0, wait_until='load')
# Should behave the same as the puppeteer_fetch.js method: load with no timeout set (skip the timeout)
# and also wait for some seconds?
#await page.waitForTimeout(1000);
#await page.waitForTimeout(extra_wait_ms);
logger.debug(f"Time to goto URL {time.time()-now:.2f}s")
return response
# In case they request to go back to the start
def action_goto_site(self, selector=None, value=None):
return self.action_goto_url(value=self.start_url)
def action_click_element_containing_text(self, selector=None, value=''):
logger.debug("Clicking element containing text")
if not len(value.strip()):
return
elem = self.page.get_by_text(value)
if elem.count():
elem.first.click(delay=randint(200, 500), timeout=3000)
def action_click_element_containing_text_if_exists(self, selector=None, value=''):
logger.debug("Clicking element containing text if exists")
if not len(value.strip()):
return
elem = self.page.get_by_text(value)
logger.debug(f"Clicking element containing text - {elem.count()} elements found")
if elem.count():
elem.first.click(delay=randint(200, 500), timeout=3000)
else:
return
def action_enter_text_in_field(self, selector, value):
if not len(selector.strip()):
return
self.page.fill(selector, value, timeout=10 * 1000)
def action_execute_js(self, selector, value):
response = self.page.evaluate(value)
return response
def action_click_element(self, selector, value):
logger.debug("Clicking element")
if not len(selector.strip()):
return
self.page.click(selector=selector, timeout=30 * 1000, delay=randint(200, 500))
def action_click_element_if_exists(self, selector, value):
import playwright._impl._errors as _api_types
logger.debug("Clicking element if exists")
if not len(selector.strip()):
return
try:
self.page.click(selector, timeout=10 * 1000, delay=randint(200, 500))
except _api_types.TimeoutError as e:
return
except _api_types.Error as e:
# Element was there, but the page redrew and now it's long gone
return
def action_click_x_y(self, selector, value):
if not re.match(r'^\s?\d+\s?,\s?\d+\s?$', value):
raise Exception("'Click X,Y' step should be in the format of '100 , 90'")
x, y = value.strip().split(',')
x = int(float(x.strip()))
y = int(float(y.strip()))
self.page.mouse.click(x=x, y=y, delay=randint(200, 500))
def action_scroll_down(self, selector, value):
# For some reason this doesn't work on some sites
self.page.mouse.wheel(0, 600)
self.page.wait_for_timeout(1000)
def action_wait_for_seconds(self, selector, value):
self.page.wait_for_timeout(float(value.strip()) * 1000)
def action_wait_for_text(self, selector, value):
import json
v = json.dumps(value)
self.page.wait_for_function(f'document.querySelector("body").innerText.includes({v});', timeout=30000)
def action_wait_for_text_in_element(self, selector, value):
import json
s = json.dumps(selector)
v = json.dumps(value)
self.page.wait_for_function(f'document.querySelector({s}).innerText.includes({v});', timeout=30000)
# @todo - in the future make some popout interface to capture what needs to be set
# https://playwright.dev/python/docs/api/class-keyboard
def action_press_enter(self, selector, value):
self.page.keyboard.press("Enter", delay=randint(200, 500))
def action_press_page_up(self, selector, value):
self.page.keyboard.press("PageUp", delay=randint(200, 500))
def action_press_page_down(self, selector, value):
self.page.keyboard.press("PageDown", delay=randint(200, 500))
def action_check_checkbox(self, selector, value):
self.page.locator(selector).check(timeout=1000)
def action_uncheck_checkbox(self, selector, value):
self.page.locator(selector).uncheck(timeout=1000)
# Responsible for maintaining a live 'context' with the chrome CDP
# @todo - how long do contexts live for anyway?
class browsersteps_live_ui(steppable_browser_interface):
context = None
page = None
render_extra_delay = 1
stale = False
# bump and kill this if idle after X sec
age_start = 0
headers = {}
# use a special driver, maybe locally etc
command_executor = os.getenv(
"PLAYWRIGHT_BROWSERSTEPS_DRIVER_URL"
)
# if not..
if not command_executor:
command_executor = os.getenv(
"PLAYWRIGHT_DRIVER_URL",
'ws://playwright-chrome:3000'
).strip('"')
browser_type = os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').strip('"')
def __init__(self, playwright_browser, proxy=None, headers=None, start_url=None):
self.headers = headers or {}
self.age_start = time.time()
self.playwright_browser = playwright_browser
self.start_url = start_url
if self.context is None:
self.connect(proxy=proxy)
# Connect and setup a new context
def connect(self, proxy=None):
# Should only get called once - test that
keep_open = 1000 * 60 * 5
now = time.time()
# @todo handle multiple contexts, bind a unique id from the browser on each req?
self.context = self.playwright_browser.new_context(
accept_downloads=False, # Should never be needed
bypass_csp=True, # This is needed to enable JavaScript execution on GitHub and others
extra_http_headers=self.headers,
ignore_https_errors=True,
proxy=proxy,
service_workers=os.getenv('PLAYWRIGHT_SERVICE_WORKERS', 'allow'),
# Should be `allow` or `block` - sites like YouTube can transmit large amounts of data via Service Workers
user_agent=manage_user_agent(headers=self.headers),
)
self.page = self.context.new_page()
# self.page.set_default_navigation_timeout(keep_open)
self.page.set_default_timeout(keep_open)
# @todo probably this doesn't work
self.page.on(
"close",
self.mark_as_closed,
)
# Listen for all console events and handle errors
self.page.on("console", lambda msg: print(f"Browser steps console - {msg.type}: {msg.text} {msg.args}"))
logger.debug(f"Time to browser setup {time.time()-now:.2f}s")
self.page.wait_for_timeout(1 * 1000)
def mark_as_closed(self):
logger.debug("Page closed, cleaning up..")
@property
def has_expired(self):
if not self.page:
return True
def get_current_state(self):
"""Return the screenshot and interactive elements mapping, generally always called after action_()"""
import importlib.resources
xpath_element_js = importlib.resources.files("changedetectionio.content_fetchers.res").joinpath('xpath_element_scraper.js').read_text()
now = time.time()
self.page.wait_for_timeout(1 * 1000)
# The actual screenshot
screenshot = self.page.screenshot(type='jpeg', full_page=True, quality=40)
self.page.evaluate("var include_filters=''")
# Go find the interactive elements
# @todo in the future, something smarter that can scan for elements with .click/focus etc event handlers?
elements = 'a,button,input,select,textarea,i,th,td,p,li,h1,h2,h3,h4,div,span'
xpath_element_js = xpath_element_js.replace('%ELEMENTS%', elements)
xpath_data = self.page.evaluate("async () => {" + xpath_element_js + "}")
# So the JS will find the smallest one first
xpath_data['size_pos'] = sorted(xpath_data['size_pos'], key=lambda k: k['width'] * k['height'], reverse=True)
logger.debug(f"Time to complete get_current_state of browser {time.time()-now:.2f}s")
# except
# playwright._impl._api_types.Error: Browser closed.
# @todo show some countdown timer?
return (screenshot, xpath_data)
def request_visualselector_data(self):
"""
Does the same that the playwright operation in content_fetcher does
This is used to just bump the VisualSelector data so it's ready to go if they click on the tab
@todo refactor and remove duplicate code, add include_filters
:param xpath_data:
:param screenshot:
:param current_include_filters:
:return:
"""
import importlib.resources
self.page.evaluate("var include_filters=''")
xpath_element_js = importlib.resources.files("changedetectionio.content_fetchers.res").joinpath('xpath_element_scraper.js').read_text()
from changedetectionio.content_fetchers import visualselector_xpath_selectors
xpath_element_js = xpath_element_js.replace('%ELEMENTS%', visualselector_xpath_selectors)
xpath_data = self.page.evaluate("async () => {" + xpath_element_js + "}")
screenshot = self.page.screenshot(type='jpeg', full_page=True, quality=int(os.getenv("SCREENSHOT_QUALITY", 72)))
return (screenshot, xpath_data)

View File

@@ -0,0 +1,17 @@
from playwright.sync_api import PlaywrightContextManager
# So Playwright wants to run as a context manager, but we do something horrible and hacky:
# we hold the session open for as long as possible, then shut it down and open a new one.
# That means we don't get to use PlaywrightContextManager's __enter__/__exit__.
# To work around this, make goodbye() act the same as __exit__().
#
# But actually I think this is because the context is opened correctly with __enter__(), but we time out the connection
# and then there's some lock condition where we can't destroy it without it hanging.
class c_PlaywrightContextManager(PlaywrightContextManager):
def goodbye(self) -> None:
self.__exit__()
def c_sync_playwright() -> PlaywrightContextManager:
return c_PlaywrightContextManager()
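# A rough usage sketch (illustrative only, not taken from elsewhere in this change):
# the manager is started once, kept alive for the life of the browser-steps UI session,
# and goodbye() is called later instead of leaving a `with` block.
#
#   ctx = c_sync_playwright()
#   playwright = ctx.start()           # what __enter__() would normally hand back
#   browser = playwright.chromium.connect_over_cdp('ws://playwright-chrome:3000')
#   ... keep browser/contexts open between HTTP requests ...
#   ctx.goodbye()                      # behaves like __exit__(), shutting Playwright down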

View File

@@ -0,0 +1,124 @@
import importlib
from concurrent.futures import ThreadPoolExecutor
from changedetectionio.processors.text_json_diff.processor import FilterNotFoundInResponse
from changedetectionio.store import ChangeDetectionStore
from functools import wraps
from flask import Blueprint
from flask_login import login_required
STATUS_CHECKING = 0
STATUS_FAILED = 1
STATUS_OK = 2
THREADPOOL_MAX_WORKERS = 3
_DEFAULT_POOL = ThreadPoolExecutor(max_workers=THREADPOOL_MAX_WORKERS)
# Maybe use fetch-time if its >5 to show some expected load time?
def threadpool(f, executor=None):
@wraps(f)
def wrap(*args, **kwargs):
return (executor or _DEFAULT_POOL).submit(f, *args, **kwargs)
return wrap
def construct_blueprint(datastore: ChangeDetectionStore):
check_proxies_blueprint = Blueprint('check_proxies', __name__)
checks_in_progress = {}
@threadpool
def long_task(uuid, preferred_proxy):
import time
from changedetectionio.content_fetchers import exceptions as content_fetcher_exceptions
from changedetectionio.safe_jinja import render as jinja_render
status = {'status': '', 'length': 0, 'text': ''}
contents = ''
now = time.time()
try:
processor_module = importlib.import_module("changedetectionio.processors.text_json_diff.processor")
update_handler = processor_module.perform_site_check(datastore=datastore,
watch_uuid=uuid
)
update_handler.call_browser(preferred_proxy_id=preferred_proxy)
# title, size is len contents not len xfer
except content_fetcher_exceptions.Non200ErrorCodeReceived as e:
if e.status_code == 404:
status.update({'status': 'OK', 'length': len(contents), 'text': f"OK but 404 (page not found)"})
elif e.status_code == 403 or e.status_code == 401:
status.update({'status': 'ERROR', 'length': len(contents), 'text': f"{e.status_code} - Access denied"})
else:
status.update({'status': 'ERROR', 'length': len(contents), 'text': f"Status code: {e.status_code}"})
except FilterNotFoundInResponse:
status.update({'status': 'OK', 'length': len(contents), 'text': f"OK but CSS/xPath filter not found (page changed layout?)"})
except content_fetcher_exceptions.EmptyReply as e:
if e.status_code == 403 or e.status_code == 401:
status.update({'status': 'ERROR OTHER', 'length': len(contents), 'text': f"Got empty reply with code {e.status_code} - Access denied"})
else:
status.update({'status': 'ERROR OTHER', 'length': len(contents) if contents else 0, 'text': f"Empty reply with code {e.status_code}, needs chrome?"})
except content_fetcher_exceptions.ReplyWithContentButNoText as e:
txt = f"Got reply but with no content - Status code {e.status_code} - It's possible that the filters were found, but contained no usable text (or contained only an image)."
status.update({'status': 'ERROR', 'text': txt})
except Exception as e:
status.update({'status': 'ERROR OTHER', 'length': len(contents) if contents else 0, 'text': f"Error: {type(e).__name__} - {str(e)}"})
else:
status.update({'status': 'OK', 'length': len(contents), 'text': ''})
if status.get('text'):
# parse 'text' as text for safety
v = {'text': status['text']}
status['text'] = jinja_render(template_str='{{text|e}}', **v)
status['time'] = "{:.2f}s".format(time.time() - now)
return status
def _recalc_check_status(uuid):
results = {}
for k, v in checks_in_progress.get(uuid, {}).items():
try:
r_1 = v.result(timeout=0.05)
except Exception as e:
# If timeout error?
results[k] = {'status': 'RUNNING'}
else:
results[k] = r_1
return results
@check_proxies_blueprint.route("/<string:uuid>/status", methods=['GET'])
@login_required
def get_recheck_status(uuid):
results = _recalc_check_status(uuid=uuid)
return results
@check_proxies_blueprint.route("/<string:uuid>/start", methods=['GET'])
@login_required
def start_check(uuid):
if not datastore.proxy_list:
return
if checks_in_progress.get(uuid):
state = _recalc_check_status(uuid=uuid)
for proxy_key, v in state.items():
if v.get('status') == 'RUNNING':
return state
else:
checks_in_progress[uuid] = {}
for k, v in datastore.proxy_list.items():
if not checks_in_progress[uuid].get(k):
checks_in_progress[uuid][k] = long_task(uuid=uuid, preferred_proxy=k)
results = _recalc_check_status(uuid=uuid)
return results
return check_proxies_blueprint

View File

@@ -0,0 +1,34 @@
from changedetectionio.strtobool import strtobool
from flask import Blueprint, flash, redirect, url_for
from flask_login import login_required
from changedetectionio.store import ChangeDetectionStore
from changedetectionio import queuedWatchMetaData
from queue import PriorityQueue
PRICE_DATA_TRACK_ACCEPT = 'accepted'
PRICE_DATA_TRACK_REJECT = 'rejected'
def construct_blueprint(datastore: ChangeDetectionStore, update_q: PriorityQueue):
price_data_follower_blueprint = Blueprint('price_data_follower', __name__)
@price_data_follower_blueprint.route("/<string:uuid>/accept", methods=['GET'])
@login_required
def accept(uuid):
datastore.data['watching'][uuid]['track_ldjson_price_data'] = PRICE_DATA_TRACK_ACCEPT
datastore.data['watching'][uuid]['processor'] = 'restock_diff'
datastore.data['watching'][uuid].clear_watch()
update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid}))
return redirect(url_for("index"))
@price_data_follower_blueprint.route("/<string:uuid>/reject", methods=['GET'])
@login_required
def reject(uuid):
datastore.data['watching'][uuid]['track_ldjson_price_data'] = PRICE_DATA_TRACK_REJECT
return redirect(url_for("index"))
return price_data_follower_blueprint

View File

@@ -0,0 +1,9 @@
# Groups tags
## How it works
Each watch has a list() of tag UUIDs, each of which relates to a config entry under application.settings.tags.
A 'tag' is actually stored like a watch, because the two will eventually share around 90% of the same config.
So a tag can be thought of as an abstraction of a watch, as sketched below.
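For illustration, the relationship might look roughly like this (the keys shown are made-up examples):

    'watching': {
        '<watch uuid>': {'url': 'https://example.com', 'tags': ['<tag uuid>']},
    },
    'settings': {
        'application': {
            'tags': {
                '<tag uuid>': {'title': 'My group', 'notification_muted': False},
            },
        },
    }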

View File

@@ -0,0 +1,188 @@
from flask import Blueprint, request, render_template, flash, url_for, redirect
from changedetectionio.store import ChangeDetectionStore
from changedetectionio.flask_app import login_optionally_required
def construct_blueprint(datastore: ChangeDetectionStore):
tags_blueprint = Blueprint('tags', __name__, template_folder="templates")
@tags_blueprint.route("/list", methods=['GET'])
@login_optionally_required
def tags_overview_page():
from .form import SingleTag
add_form = SingleTag(request.form)
sorted_tags = sorted(datastore.data['settings']['application'].get('tags').items(), key=lambda x: x[1]['title'])
from collections import Counter
tag_count = Counter(tag for watch in datastore.data['watching'].values() if watch.get('tags') for tag in watch['tags'])
output = render_template("groups-overview.html",
available_tags=sorted_tags,
form=add_form,
tag_count=tag_count
)
return output
@tags_blueprint.route("/add", methods=['POST'])
@login_optionally_required
def form_tag_add():
from .form import SingleTag
add_form = SingleTag(request.form)
if not add_form.validate():
for widget, l in add_form.errors.items():
flash(','.join(l), 'error')
return redirect(url_for('tags.tags_overview_page'))
title = request.form.get('name').strip()
if datastore.tag_exists_by_name(title):
flash(f'The tag "{title}" already exists', "error")
return redirect(url_for('tags.tags_overview_page'))
datastore.add_tag(title)
flash("Tag added")
return redirect(url_for('tags.tags_overview_page'))
@tags_blueprint.route("/mute/<string:uuid>", methods=['GET'])
@login_optionally_required
def mute(uuid):
if datastore.data['settings']['application']['tags'].get(uuid):
datastore.data['settings']['application']['tags'][uuid]['notification_muted'] = not datastore.data['settings']['application']['tags'][uuid]['notification_muted']
return redirect(url_for('tags.tags_overview_page'))
@tags_blueprint.route("/delete/<string:uuid>", methods=['GET'])
@login_optionally_required
def delete(uuid):
removed = 0
# Delete the tag, and any tag reference
if datastore.data['settings']['application']['tags'].get(uuid):
del datastore.data['settings']['application']['tags'][uuid]
for watch_uuid, watch in datastore.data['watching'].items():
if watch.get('tags') and uuid in watch['tags']:
removed += 1
watch['tags'].remove(uuid)
flash(f"Tag deleted and removed from {removed} watches")
return redirect(url_for('tags.tags_overview_page'))
@tags_blueprint.route("/unlink/<string:uuid>", methods=['GET'])
@login_optionally_required
def unlink(uuid):
unlinked = 0
for watch_uuid, watch in datastore.data['watching'].items():
if watch.get('tags') and uuid in watch['tags']:
unlinked += 1
watch['tags'].remove(uuid)
flash(f"Tag unlinked removed from {unlinked} watches")
return redirect(url_for('tags.tags_overview_page'))
@tags_blueprint.route("/delete_all", methods=['GET'])
@login_optionally_required
def delete_all():
for watch_uuid, watch in datastore.data['watching'].items():
watch['tags'] = []
datastore.data['settings']['application']['tags'] = {}
flash(f"All tags deleted")
return redirect(url_for('tags.tags_overview_page'))
@tags_blueprint.route("/edit/<string:uuid>", methods=['GET'])
@login_optionally_required
def form_tag_edit(uuid):
from changedetectionio.blueprint.tags.form import group_restock_settings_form
if uuid == 'first':
uuid = list(datastore.data['settings']['application']['tags'].keys()).pop()
default = datastore.data['settings']['application']['tags'].get(uuid)
form = group_restock_settings_form(
formdata=request.form if request.method == 'POST' else None,
data=default,
extra_notification_tokens=datastore.get_unique_notification_tokens_available(),
default_system_settings = datastore.data['settings'],
)
template_args = {
'data': default,
'form': form,
'watch': default,
'extra_notification_token_placeholder_info': datastore.get_unique_notification_token_placeholders_available(),
}
included_content = {}
if form.extra_form_content():
# So that the extra panels can access _helpers.html etc, we set the environment to load from templates/
# And then render the code from the module
from jinja2 import Environment, FileSystemLoader
import importlib.resources
templates_dir = str(importlib.resources.files("changedetectionio").joinpath('templates'))
env = Environment(loader=FileSystemLoader(templates_dir))
template_str = """{% from '_helpers.html' import render_field, render_checkbox_field, render_button %}
<script>
$(document).ready(function () {
toggleOpacity('#overrides_watch', '#restock-fieldset-price-group', true);
});
</script>
<fieldset>
<div class="pure-control-group">
<fieldset class="pure-group">
{{ render_checkbox_field(form.overrides_watch) }}
<span class="pure-form-message-inline">Used for watches in "Restock & Price detection" mode</span>
</fieldset>
</div>
</fieldset>
"""
template_str += form.extra_form_content()
template = env.from_string(template_str)
included_content = template.render(**template_args)
output = render_template("edit-tag.html",
settings_application=datastore.data['settings']['application'],
extra_tab_content=form.extra_tab_content() if form.extra_tab_content() else None,
extra_form_content=included_content,
**template_args
)
return output
@tags_blueprint.route("/edit/<string:uuid>", methods=['POST'])
@login_optionally_required
def form_tag_edit_submit(uuid):
from changedetectionio.blueprint.tags.form import group_restock_settings_form
if uuid == 'first':
uuid = list(datastore.data['settings']['application']['tags'].keys()).pop()
default = datastore.data['settings']['application']['tags'].get(uuid)
form = group_restock_settings_form(formdata=request.form if request.method == 'POST' else None,
data=default,
extra_notification_tokens=datastore.get_unique_notification_tokens_available()
)
# @todo subclass form so validation works
#if not form.validate():
# for widget, l in form.errors.items():
# flash(','.join(l), 'error')
# return redirect(url_for('tags.form_tag_edit_submit', uuid=uuid))
datastore.data['settings']['application']['tags'][uuid].update(form.data)
datastore.data['settings']['application']['tags'][uuid]['processor'] = 'restock_diff'
datastore.needs_write_urgent = True
flash("Updated")
return redirect(url_for('tags.tags_overview_page'))
@tags_blueprint.route("/delete/<string:uuid>", methods=['GET'])
def form_tag_delete(uuid):
return redirect(url_for('tags.tags_overview_page'))
return tags_blueprint

View File

@@ -0,0 +1,21 @@
from wtforms import (
Form,
StringField,
SubmitField,
validators,
)
from wtforms.fields.simple import BooleanField
from changedetectionio.processors.restock_diff.forms import processor_settings_form as restock_settings_form
class group_restock_settings_form(restock_settings_form):
overrides_watch = BooleanField('Activate for individual watches in this tag/group?', default=False)
class SingleTag(Form):
name = StringField('Tag name', [validators.InputRequired()], render_kw={"placeholder": "Name"})
save_button = SubmitField('Save', render_kw={"class": "pure-button pure-button-primary"})

View File

@@ -0,0 +1,146 @@
{% extends 'base.html' %}
{% block content %}
{% from '_helpers.html' import render_field, render_checkbox_field, render_button %}
{% from '_common_fields.html' import render_common_settings_form %}
<script>
const notification_base_url="{{url_for('ajax_callback_send_notification_test', mode="group-settings")}}";
</script>
<script src="{{url_for('static_content', group='js', filename='tabs.js')}}" defer></script>
<script>
/*{% if emailprefix %}*/
/*const email_notification_prefix=JSON.parse('{{ emailprefix|tojson }}');*/
/*{% endif %}*/
</script>
<script src="{{url_for('static_content', group='js', filename='watch-settings.js')}}" defer></script>
<script src="{{url_for('static_content', group='js', filename='notifications.js')}}" defer></script>
<div class="edit-form monospaced-textarea">
<div class="tabs collapsable">
<ul>
<li class="tab" id=""><a href="#general">General</a></li>
<li class="tab"><a href="#filters-and-triggers">Filters &amp; Triggers</a></li>
{% if extra_tab_content %}
<li class="tab"><a href="#extras_tab">{{ extra_tab_content }}</a></li>
{% endif %}
<li class="tab"><a href="#notifications">Notifications</a></li>
</ul>
</div>
<div class="box-wrap inner">
<form class="pure-form pure-form-stacked"
action="{{ url_for('tags.form_tag_edit', uuid=data.uuid) }}" method="POST">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}">
<div class="tab-pane-inner" id="general">
<fieldset>
<div class="pure-control-group">
{{ render_field(form.title, placeholder="https://...", required=true, class="m-d") }}
</div>
</fieldset>
</div>
<div class="tab-pane-inner" id="filters-and-triggers">
<div class="pure-control-group">
{% set field = render_field(form.include_filters,
rows=5,
placeholder="#example
xpath://body/div/span[contains(@class, 'example-class')]",
class="m-d")
%}
{{ field }}
{% if '/text()' in field %}
<span class="pure-form-message-inline"><strong>Note!: //text() function does not work where the &lt;element&gt; contains &lt;![CDATA[]]&gt;</strong></span><br>
{% endif %}
<span class="pure-form-message-inline">One CSS, xPath, JSON Path/JQ selector per line, <i>any</i> rules that matches will be used.<br>
<div data-target="#advanced-help-selectors" class="toggle-show pure-button button-tag button-xsmall">Show advanced help and tips</div>
<ul id="advanced-help-selectors">
<li>CSS - Limit text to this CSS rule, only text matching this CSS rule is included.</li>
<li>JSON - Limit text to this JSON rule, using either <a href="https://pypi.org/project/jsonpath-ng/" target="new">JSONPath</a> or <a href="https://stedolan.github.io/jq/" target="new">jq</a> (if installed).
<ul>
<li>JSONPath: Prefix with <code>json:</code>, use <code>json:$</code> to force re-formatting if required, <a href="https://jsonpath.com/" target="new">test your JSONPath here</a>.</li>
{% if jq_support %}
<li>jq: Prefix with <code>jq:</code> and <a href="https://jqplay.org/" target="new">test your jq here</a>. Using <a href="https://stedolan.github.io/jq/" target="new">jq</a> allows for complex filtering and processing of JSON data with built-in functions, regex, filtering, and more. See examples and documentation <a href="https://stedolan.github.io/jq/manual/" target="new">here</a>. The <code>jqraw:</code> prefix outputs the results as text instead of a JSON list.</li>
{% else %}
<li>jq support not installed</li>
{% endif %}
</ul>
</li>
<li>XPath - Limit text to this XPath rule, simply start with a forward-slash. To specify XPath explicitly, or when the rule starts with an XPath function, prefix with <code>xpath:</code>
<ul>
<li>Example: <code>//*[contains(@class, 'sametext')]</code> or <code>xpath:count(//*[contains(@class, 'sametext')])</code>, <a
href="http://xpather.com/" target="new">test your XPath here</a></li>
<li>Example: Get all titles from an RSS feed <code>//title/text()</code></li>
<li>To use XPath1.0: Prefix with <code>xpath1:</code></li>
</ul>
</li>
</ul>
Please be sure that you thoroughly understand how to write CSS, JSONPath, XPath{% if jq_support %}, or jq selector{%endif%} rules before filing an issue on GitHub! See <a
href="https://github.com/dgtlmoon/changedetection.io/wiki/CSS-Selector-help">here</a> for more CSS selector help.<br>
</span>
</div>
<fieldset class="pure-control-group">
{{ render_field(form.subtractive_selectors, rows=5, placeholder="header
footer
nav
.stockticker
//*[contains(text(), 'Advertisement')]") }}
<span class="pure-form-message-inline">
<ul>
<li> Remove HTML element(s) by CSS and XPath selectors before text conversion. </li>
<li> Don't paste HTML here, use only CSS and XPath selectors </li>
<li> Add multiple elements, CSS or XPath selectors per line to ignore multiple parts of the HTML. </li>
</ul>
</span>
</fieldset>
</div>
{# rendered sub Template #}
{% if extra_form_content %}
<div class="tab-pane-inner" id="extras_tab">
{{ extra_form_content|safe }}
</div>
{% endif %}
<div class="tab-pane-inner" id="notifications">
<fieldset>
<div class="pure-control-group inline-radio">
{{ render_checkbox_field(form.notification_muted) }}
</div>
{% if is_html_webdriver %}
<div class="pure-control-group inline-radio">
{{ render_checkbox_field(form.notification_screenshot) }}
<span class="pure-form-message-inline">
<strong>Use with caution!</strong> This will easily fill up your email storage quota or flood other storages.
</span>
</div>
{% endif %}
<div class="field-group" id="notification-field-group">
{% if has_default_notification_urls %}
<div class="inline-warning">
<img class="inline-warning-icon" src="{{url_for('static_content', group='images', filename='notice.svg')}}" alt="Look out!" title="Lookout!" >
There are <a href="{{ url_for('settings_page')}}#notifications">system-wide notification URLs enabled</a>; this form will override notification settings for this watch only &ndash; an empty Notification URL list here will still send notifications.
</div>
{% endif %}
<a href="#notifications" id="notification-setting-reset-to-default" class="pure-button button-xsmall" style="right: 20px; top: 20px; position: absolute; background-color: #5f42dd; border-radius: 4px; font-size: 70%; color: #fff">Use system defaults</a>
{{ render_common_settings_form(form, emailprefix, settings_application, extra_notification_token_placeholder_info) }}
</div>
</fieldset>
</div>
<div id="actions">
<div class="pure-control-group">
{{ render_button(form.save_button) }}
</div>
</div>
</form>
</div>
</div>
{% endblock %}

View File

@@ -0,0 +1,62 @@
{% extends 'base.html' %}
{% block content %}
{% from '_helpers.html' import render_simple_field, render_field %}
<script src="{{url_for('static_content', group='js', filename='jquery-3.6.0.min.js')}}"></script>
<div class="box">
<form class="pure-form" action="{{ url_for('tags.form_tag_add') }}" method="POST" id="new-watch-form">
<input type="hidden" name="csrf_token" value="{{ csrf_token() }}" >
<fieldset>
<legend>Add a new organisational tag</legend>
<div id="watch-add-wrapper-zone">
<div>
{{ render_simple_field(form.name, placeholder="watch label / tag") }}
</div>
<div>
{{ render_simple_field(form.save_button, title="Save" ) }}
</div>
</div>
<br>
<div style="color: #fff;">Groups allows you to manage filters and notifications for multiple watches under a single organisational tag.</div>
</fieldset>
</form>
<!-- @todo maybe some overview matrix, 'tick' with which has notification, filter rules etc -->
<div id="watch-table-wrapper">
<table class="pure-table pure-table-striped watch-table group-overview-table">
<thead>
<tr>
<th></th>
<th># Watches</th>
<th>Tag / Label name</th>
<th></th>
</tr>
</thead>
<tbody>
<!--
@Todo - connect Last checked, Last Changed, Number of Watches etc
-->
{% if not available_tags|length %}
<tr>
<td colspan="3">No website organisational tags/groups configured</td>
</tr>
{% endif %}
{% for uuid, tag in available_tags %}
<tr id="{{ uuid }}" class="{{ loop.cycle('pure-table-odd', 'pure-table-even') }}">
<td class="watch-controls">
<a class="link-mute state-{{'on' if tag.notification_muted else 'off'}}" href="{{url_for('tags.mute', uuid=tag.uuid)}}"><img src="{{url_for('static_content', group='images', filename='bell-off.svg')}}" alt="Mute notifications" title="Mute notifications" class="icon icon-mute" ></a>
</td>
<td>{{ "{:,}".format(tag_count[uuid]) if uuid in tag_count else 0 }}</td>
<td class="title-col inline"> <a href="{{url_for('index', tag=uuid) }}">{{ tag.title }}</a></td>
<td>
<a class="pure-button pure-button-primary" href="{{ url_for('tags.form_tag_edit', uuid=uuid) }}">Edit</a>&nbsp;
<a class="pure-button pure-button-primary" href="{{ url_for('tags.delete', uuid=uuid) }}" title="Deletes and removes tag">Delete</a>
<a class="pure-button pure-button-primary" href="{{ url_for('tags.unlink', uuid=uuid) }}" title="Keep the tag but unlink any watches">Unlink</a>
</td>
</tr>
{% endfor %}
</tbody>
</table>
</div>
</div>
{% endblock %}

View File

@@ -0,0 +1,45 @@
import sys
from changedetectionio.strtobool import strtobool
from loguru import logger
from changedetectionio.content_fetchers.exceptions import BrowserStepsStepException
import os
# Visual Selector scraper - 'Button' is there because some sites have <button>OUT OF STOCK</button>.
visualselector_xpath_selectors = 'div,span,form,table,tbody,tr,td,a,p,ul,li,h1,h2,h3,h4,header,footer,section,article,aside,details,main,nav,section,summary,button'
# available_fetchers() will scan this implementation looking for anything starting with html_
# this information is used in the form selections
from changedetectionio.content_fetchers.requests import fetcher as html_requests
def available_fetchers():
# See the if statement at the bottom of this file for how we switch between playwright and webdriver
import inspect
p = []
for name, obj in inspect.getmembers(sys.modules[__name__], inspect.isclass):
if inspect.isclass(obj):
# @todo html_ is maybe better as fetcher_ or something
# In this case, make sure to edit the default one in store.py and fetch_site_status.py
if name.startswith('html_'):
t = tuple([name, obj.fetcher_description])
p.append(t)
return p
# Decide which is the 'real' HTML webdriver, this is more a system wide config
# rather than site-specific.
use_playwright_as_chrome_fetcher = os.getenv('PLAYWRIGHT_DRIVER_URL', False)
if use_playwright_as_chrome_fetcher:
# @note - For now, browser steps always uses playwright
if not strtobool(os.getenv('FAST_PUPPETEER_CHROME_FETCHER', 'False')):
logger.debug('Using Playwright library as fetcher')
from .playwright import fetcher as html_webdriver
else:
logger.debug('Using direct Python Puppeteer library as fetcher')
from .puppeteer import fetcher as html_webdriver
else:
logger.debug("Falling back to selenium as fetcher")
from .webdriver_selenium import fetcher as html_webdriver
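# In summary (derived from the branches above):
#   PLAYWRIGHT_DRIVER_URL set, FAST_PUPPETEER_CHROME_FETCHER unset/false -> Playwright fetcher
#   PLAYWRIGHT_DRIVER_URL set, FAST_PUPPETEER_CHROME_FETCHER=True        -> direct Puppeteer fetcher
#   PLAYWRIGHT_DRIVER_URL unset                                          -> Selenium webdriver fallback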

View File

@@ -0,0 +1,179 @@
import os
from abc import abstractmethod
from loguru import logger
from changedetectionio.content_fetchers import BrowserStepsStepException
def manage_user_agent(headers, current_ua=''):
"""
Basic setting of user-agent
NOTE!!!!!! The service that does the actual Chrome fetching should handle any anti-robot techniques
THERE ARE MANY WAYS THAT IT CAN BE DETECTED AS A ROBOT!!
This does not take care of
- Scraping of 'navigator' (platform, productSub, vendor, oscpu etc etc) browser object (navigator.appVersion) etc
- TCP/IP fingerprint JA3 etc
- Graphic rendering fingerprinting
- Your IP being obviously in a pool of bad actors
- Too many requests
- Scraping of Sec-CH-UA (client hints) browser replies (thanks Google!!)
- Scraping of ServiceWorker, new window calls etc
See https://filipvitas.medium.com/how-to-set-user-agent-header-with-puppeteer-js-and-not-fail-28c7a02165da
Puppeteer requests https://github.com/dgtlmoon/pyppeteerstealth
:param page:
:param headers:
:return:
"""
# Ask it what the user agent is; if it's obviously HeadlessChrome, rewrite it to look like regular Chrome
ua_in_custom_headers = headers.get('User-Agent')
if ua_in_custom_headers:
return ua_in_custom_headers
if not ua_in_custom_headers and current_ua:
current_ua = current_ua.replace('HeadlessChrome', 'Chrome')
return current_ua
return None
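# Illustrative examples of the behaviour above (values are made up):
#   manage_user_agent({'User-Agent': 'MyBot/1.0'})                  -> 'MyBot/1.0'
#   manage_user_agent({}, current_ua='HeadlessChrome/120 Safari')   -> 'Chrome/120 Safari'
#   manage_user_agent({})                                           -> None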
class Fetcher():
browser_connection_is_custom = None
browser_connection_url = None
browser_steps = None
browser_steps_screenshot_path = None
content = None
error = None
fetcher_description = "No description"
headers = {}
instock_data = None
instock_data_js = ""
status_code = None
webdriver_js_execute_code = None
xpath_data = None
xpath_element_js = ""
# Will be needed in the future by the VisualSelector, always get this where possible.
screenshot = False
system_http_proxy = os.getenv('HTTP_PROXY')
system_https_proxy = os.getenv('HTTPS_PROXY')
# Time ONTOP of the system defined env minimum time
render_extract_delay = 0
def __init__(self):
import importlib.resources
self.xpath_element_js = importlib.resources.files("changedetectionio.content_fetchers.res").joinpath('xpath_element_scraper.js').read_text(encoding='utf-8')
self.instock_data_js = importlib.resources.files("changedetectionio.content_fetchers.res").joinpath('stock-not-in-stock.js').read_text(encoding='utf-8')
@abstractmethod
def get_error(self):
return self.error
@abstractmethod
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_include_filters=None,
is_binary=False,
empty_pages_are_a_change=False):
# Should set self.error, self.status_code and self.content
pass
@abstractmethod
def quit(self):
return
@abstractmethod
def get_last_status_code(self):
return self.status_code
@abstractmethod
def screenshot_step(self, step_n):
if self.browser_steps_screenshot_path and not os.path.isdir(self.browser_steps_screenshot_path):
logger.debug(f"> Creating data dir {self.browser_steps_screenshot_path}")
os.mkdir(self.browser_steps_screenshot_path)
return None
@abstractmethod
# Return true/false if this checker is ready to run, in the case it needs todo some special config check etc
def is_ready(self):
return True
def get_all_headers(self):
"""
Get all headers but ensure all keys are lowercase
:return:
"""
return {k.lower(): v for k, v in self.headers.items()}
def browser_steps_get_valid_steps(self):
if self.browser_steps is not None and len(self.browser_steps):
valid_steps = list(filter(
lambda s: (s['operation'] and len(s['operation']) and s['operation'] != 'Choose one'),
self.browser_steps))
# Just in case they selected Goto site by accident with older JS
if valid_steps and valid_steps[0]['operation'] == 'Goto site':
del(valid_steps[0])
return valid_steps
return None
def iterate_browser_steps(self, start_url=None):
from changedetectionio.blueprint.browser_steps.browser_steps import steppable_browser_interface
from playwright._impl._errors import TimeoutError, Error
from changedetectionio.safe_jinja import render as jinja_render
step_n = 0
if self.browser_steps is not None and len(self.browser_steps):
interface = steppable_browser_interface(start_url=start_url)
interface.page = self.page
valid_steps = self.browser_steps_get_valid_steps()
for step in valid_steps:
step_n += 1
logger.debug(f">> Iterating check - browser Step n {step_n} - {step['operation']}...")
self.screenshot_step("before-" + str(step_n))
self.save_step_html("before-" + str(step_n))
try:
optional_value = step['optional_value']
selector = step['selector']
# Support for jinja2 template in step values, with date module added
if '{%' in step['optional_value'] or '{{' in step['optional_value']:
optional_value = jinja_render(template_str=step['optional_value'])
if '{%' in step['selector'] or '{{' in step['selector']:
selector = jinja_render(template_str=step['selector'])
getattr(interface, "call_action")(action_name=step['operation'],
selector=selector,
optional_value=optional_value)
self.screenshot_step(step_n)
self.save_step_html(step_n)
except (Error, TimeoutError) as e:
logger.debug(str(e))
# Stop processing here
raise BrowserStepsStepException(step_n=step_n, original_e=e)
# It's always good to reset these
def delete_browser_steps_screenshots(self):
import glob
if self.browser_steps_screenshot_path is not None:
dest = os.path.join(self.browser_steps_screenshot_path, 'step_*.jpeg')
files = glob.glob(dest)
for f in files:
if os.path.isfile(f):
os.unlink(f)
def save_step_html(self, step_n):
if self.browser_steps_screenshot_path and not os.path.isdir(self.browser_steps_screenshot_path):
logger.debug(f"> Creating data dir {self.browser_steps_screenshot_path}")
os.mkdir(self.browser_steps_screenshot_path)
pass

View File

@@ -0,0 +1,97 @@
from loguru import logger
class Non200ErrorCodeReceived(Exception):
def __init__(self, status_code, url, screenshot=None, xpath_data=None, page_html=None):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
self.screenshot = screenshot
self.xpath_data = xpath_data
self.page_text = None
if page_html:
from changedetectionio import html_tools
self.page_text = html_tools.html_to_text(page_html)
return
class checksumFromPreviousCheckWasTheSame(Exception):
def __init__(self):
return
class JSActionExceptions(Exception):
def __init__(self, status_code, url, screenshot, message=''):
self.status_code = status_code
self.url = url
self.screenshot = screenshot
self.message = message
return
class BrowserConnectError(Exception):
msg = ''
def __init__(self, msg):
self.msg = msg
logger.error(f"Browser connection error {msg}")
return
class BrowserFetchTimedOut(Exception):
msg = ''
def __init__(self, msg):
self.msg = msg
logger.error(f"Browser processing took too long - {msg}")
return
class BrowserStepsStepException(Exception):
def __init__(self, step_n, original_e):
self.step_n = step_n
self.original_e = original_e
logger.debug(f"Browser Steps exception at step {self.step_n} {str(original_e)}")
return
# @todo - make base Exception class that announces via logger()
class PageUnloadable(Exception):
def __init__(self, status_code=None, url='', message='', screenshot=False):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
self.screenshot = screenshot
self.message = message
return
class BrowserStepsInUnsupportedFetcher(Exception):
def __init__(self, url):
self.url = url
return
class EmptyReply(Exception):
def __init__(self, status_code, url, screenshot=None):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
self.screenshot = screenshot
return
class ScreenshotUnavailable(Exception):
def __init__(self, status_code, url, page_html=None):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
if page_html:
from changedetectionio.html_tools import html_to_text
self.page_text = html_to_text(page_html)
return
class ReplyWithContentButNoText(Exception):
def __init__(self, status_code, url, screenshot=None, has_filters=False, html_content='', xpath_data=None):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
self.screenshot = screenshot
self.has_filters = has_filters
self.html_content = html_content
self.xpath_data = xpath_data
return

View File

@@ -0,0 +1,211 @@
import json
import os
from urllib.parse import urlparse
from loguru import logger
from changedetectionio.content_fetchers.base import Fetcher, manage_user_agent
from changedetectionio.content_fetchers.exceptions import PageUnloadable, Non200ErrorCodeReceived, EmptyReply, ScreenshotUnavailable
class fetcher(Fetcher):
fetcher_description = "Playwright {}/Javascript".format(
os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').capitalize()
)
if os.getenv("PLAYWRIGHT_DRIVER_URL"):
fetcher_description += " via '{}'".format(os.getenv("PLAYWRIGHT_DRIVER_URL"))
browser_type = ''
command_executor = ''
# Configs for Proxy setup
# In the ENV vars these are prefixed with "playwright_proxy_", for example "playwright_proxy_server"
playwright_proxy_settings_mappings = ['bypass', 'server', 'username', 'password']
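# e.g. playwright_proxy_server="http://proxy.example:3128", playwright_proxy_username="user",
# playwright_proxy_password="pass" (hostname and credentials here are illustrative only)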
proxy = None
def __init__(self, proxy_override=None, custom_browser_connection_url=None):
super().__init__()
self.browser_type = os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').strip('"')
if custom_browser_connection_url:
self.browser_connection_is_custom = True
self.browser_connection_url = custom_browser_connection_url
else:
# Fallback to fetching from system
# .strip('"') is going to save someone a lot of time when they accidently wrap the env value
self.browser_connection_url = os.getenv("PLAYWRIGHT_DRIVER_URL", 'ws://playwright-chrome:3000').strip('"')
# If any proxy settings are enabled, then we should setup the proxy object
proxy_args = {}
for k in self.playwright_proxy_settings_mappings:
v = os.getenv('playwright_proxy_' + k, False)
if v:
proxy_args[k] = v.strip('"')
if proxy_args:
self.proxy = proxy_args
# allow per-watch proxy selection override
if proxy_override:
self.proxy = {'server': proxy_override}
if self.proxy:
# Playwright needs separate username and password values
parsed = urlparse(self.proxy.get('server'))
if parsed.username:
self.proxy['username'] = parsed.username
self.proxy['password'] = parsed.password
def screenshot_step(self, step_n=''):
super().screenshot_step(step_n=step_n)
screenshot = self.page.screenshot(type='jpeg', full_page=True, quality=int(os.getenv("SCREENSHOT_QUALITY", 72)))
if self.browser_steps_screenshot_path is not None:
destination = os.path.join(self.browser_steps_screenshot_path, 'step_{}.jpeg'.format(step_n))
logger.debug(f"Saving step screenshot to {destination}")
with open(destination, 'wb') as f:
f.write(screenshot)
def save_step_html(self, step_n):
super().save_step_html(step_n=step_n)
content = self.page.content()
destination = os.path.join(self.browser_steps_screenshot_path, 'step_{}.html'.format(step_n))
logger.debug(f"Saving step HTML to {destination}")
with open(destination, 'w') as f:
f.write(content)
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_include_filters=None,
is_binary=False,
empty_pages_are_a_change=False):
from playwright.sync_api import sync_playwright
import playwright._impl._errors
from changedetectionio.content_fetchers import visualselector_xpath_selectors
self.delete_browser_steps_screenshots()
response = None
with sync_playwright() as p:
browser_type = getattr(p, self.browser_type)
# Seemed to cause a connection Exception even though I can see it connect
# self.browser = browser_type.connect(self.command_executor, timeout=timeout*1000)
# 60,000 connection timeout only
browser = browser_type.connect_over_cdp(self.browser_connection_url, timeout=60000)
# SOCKS5 with authentication is not supported (yet)
# https://github.com/microsoft/playwright/issues/10567
# Set user agent to prevent Cloudflare from blocking the browser
# Use the default one configured in the App.py model that's passed from fetch_site_status.py
context = browser.new_context(
accept_downloads=False, # Should never be needed
bypass_csp=True, # This is needed to enable JavaScript execution on GitHub and others
extra_http_headers=request_headers,
ignore_https_errors=True,
proxy=self.proxy,
service_workers=os.getenv('PLAYWRIGHT_SERVICE_WORKERS', 'allow'), # Should be `allow` or `block` - sites like YouTube can transmit large amounts of data via Service Workers
user_agent=manage_user_agent(headers=request_headers),
)
self.page = context.new_page()
# Listen for all console events and handle errors
self.page.on("console", lambda msg: print(f"Playwright console: Watch URL: {url} {msg.type}: {msg.text} {msg.args}"))
# Re-use as much code from browser steps as possible so it's the same
from changedetectionio.blueprint.browser_steps.browser_steps import steppable_browser_interface
browsersteps_interface = steppable_browser_interface(start_url=url)
browsersteps_interface.page = self.page
response = browsersteps_interface.action_goto_url(value=url)
if response is None:
context.close()
browser.close()
logger.debug("Content Fetcher > Response object from the browser communication was none")
raise EmptyReply(url=url, status_code=None)
self.headers = response.all_headers()
try:
if self.webdriver_js_execute_code is not None and len(self.webdriver_js_execute_code):
browsersteps_interface.action_execute_js(value=self.webdriver_js_execute_code, selector=None)
except playwright._impl._errors.TimeoutError as e:
# This can be ok, we will try to grab what we could retrieve, so don't close the browser here
pass
except Exception as e:
logger.debug(f"Content Fetcher > Other exception when executing custom JS code {str(e)}")
context.close()
browser.close()
raise PageUnloadable(url=url, status_code=None, message=str(e))
extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
self.page.wait_for_timeout(extra_wait * 1000)
try:
self.status_code = response.status
except Exception as e:
# https://github.com/dgtlmoon/changedetection.io/discussions/2122#discussioncomment-8241962
logger.critical(f"Response from the browser/Playwright did not have a status_code! Response follows.")
logger.critical(response)
context.close()
browser.close()
raise PageUnloadable(url=url, status_code=None, message=str(e))
if self.status_code != 200 and not ignore_status_codes:
screenshot = self.page.screenshot(type='jpeg', full_page=True,
quality=int(os.getenv("SCREENSHOT_QUALITY", 72)))
raise Non200ErrorCodeReceived(url=url, status_code=self.status_code, screenshot=screenshot)
if not empty_pages_are_a_change and len(self.page.content().strip()) == 0:
logger.debug("Content Fetcher > Content was empty, empty_pages_are_a_change = False")
context.close()
browser.close()
raise EmptyReply(url=url, status_code=response.status)
# Run Browser Steps here
if self.browser_steps_get_valid_steps():
self.iterate_browser_steps(start_url=url)
self.page.wait_for_timeout(extra_wait * 1000)
# So we can find an element on the page where its selector was entered manually (maybe not xPath etc)
if current_include_filters is not None:
self.page.evaluate("var include_filters={}".format(json.dumps(current_include_filters)))
else:
self.page.evaluate("var include_filters=''")
self.xpath_data = self.page.evaluate(
"async () => {" + self.xpath_element_js.replace('%ELEMENTS%', visualselector_xpath_selectors) + "}")
self.instock_data = self.page.evaluate("async () => {" + self.instock_data_js + "}")
self.content = self.page.content()
# Bug 3 in Playwright screenshot handling
# Some bug where it gives the wrong screenshot size, but making a request with the clip set first seems to solve it
# JPEG is better here because the screenshots can be very very large
# Screenshots also travel via the ws:// (websocket) meaning that the binary data is base64 encoded
# which will significantly increase the IO size between the server and client, it's recommended to use the lowest
# acceptable screenshot quality here
try:
# The actual screenshot - this always base64 and needs decoding! horrible! huge CPU usage
self.screenshot = self.page.screenshot(type='jpeg',
full_page=True,
quality=int(os.getenv("SCREENSHOT_QUALITY", 72)),
)
except Exception as e:
# It's likely the screenshot was too long/big and something crashed
raise ScreenshotUnavailable(url=url, status_code=self.status_code)
finally:
context.close()
browser.close()

View File

@@ -0,0 +1,272 @@
import asyncio
import json
import os
import websockets.exceptions
from urllib.parse import urlparse
from loguru import logger
from changedetectionio.content_fetchers.base import Fetcher, manage_user_agent
from changedetectionio.content_fetchers.exceptions import PageUnloadable, Non200ErrorCodeReceived, EmptyReply, BrowserFetchTimedOut, BrowserConnectError
class fetcher(Fetcher):
fetcher_description = "Puppeteer/direct {}/Javascript".format(
os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').capitalize()
)
if os.getenv("PLAYWRIGHT_DRIVER_URL"):
fetcher_description += " via '{}'".format(os.getenv("PLAYWRIGHT_DRIVER_URL"))
browser_type = ''
command_executor = ''
proxy = None
def __init__(self, proxy_override=None, custom_browser_connection_url=None):
super().__init__()
if custom_browser_connection_url:
self.browser_connection_is_custom = True
self.browser_connection_url = custom_browser_connection_url
else:
# Fallback to fetching from system
# .strip('"') is going to save someone a lot of time when they accidently wrap the env value
self.browser_connection_url = os.getenv("PLAYWRIGHT_DRIVER_URL", 'ws://playwright-chrome:3000').strip('"')
# allow per-watch proxy selection override
# @todo check global too?
if proxy_override:
# Playwright needs separate username and password values
parsed = urlparse(proxy_override)
if parsed:
self.proxy = {'username': parsed.username, 'password': parsed.password}
# Add the proxy server chrome start option, the username and password never gets added here
# (It always goes in via await self.page.authenticate(self.proxy))
# @todo filter some injection attack?
# check scheme when no scheme
proxy_url = parsed.scheme + "://" if parsed.scheme else 'http://'
r = "?" if not '?' in self.browser_connection_url else '&'
port = ":"+str(parsed.port) if parsed.port else ''
q = "?"+parsed.query if parsed.query else ''
proxy_url += f"{parsed.hostname}{port}{parsed.path}{q}"
self.browser_connection_url += f"{r}--proxy-server={proxy_url}"
# def screenshot_step(self, step_n=''):
# screenshot = self.page.screenshot(type='jpeg', full_page=True, quality=85)
#
# if self.browser_steps_screenshot_path is not None:
# destination = os.path.join(self.browser_steps_screenshot_path, 'step_{}.jpeg'.format(step_n))
# logger.debug(f"Saving step screenshot to {destination}")
# with open(destination, 'wb') as f:
# f.write(screenshot)
#
# def save_step_html(self, step_n):
# content = self.page.content()
# destination = os.path.join(self.browser_steps_screenshot_path, 'step_{}.html'.format(step_n))
# logger.debug(f"Saving step HTML to {destination}")
# with open(destination, 'w') as f:
# f.write(content)
async def fetch_page(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes,
current_include_filters,
is_binary,
empty_pages_are_a_change
):
from changedetectionio.content_fetchers import visualselector_xpath_selectors
self.delete_browser_steps_screenshots()
extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
from pyppeteer import Pyppeteer
pyppeteer_instance = Pyppeteer()
# Connect directly using the specified browser_ws_endpoint
# @todo timeout
try:
browser = await pyppeteer_instance.connect(browserWSEndpoint=self.browser_connection_url,
ignoreHTTPSErrors=True
)
except websockets.exceptions.InvalidStatusCode as e:
raise BrowserConnectError(msg=f"Error while trying to connect the browser, Code {e.status_code} (check your access, whitelist IP, password etc)")
except websockets.exceptions.InvalidURI:
raise BrowserConnectError(msg=f"Error connecting to the browser, check your browser connection address (should be ws:// or wss://")
except Exception as e:
raise BrowserConnectError(msg=f"Error connecting to the browser {str(e)}")
# Better is to launch chrome with the URL as arg
# non-headless - newPage() will launch an extra tab/window, .browser should already contain 1 page/tab
# headless - ask a new page
pages = await browser.pages
self.page = pages[0] if pages else await browser.newPage()
try:
from pyppeteerstealth import inject_evasions_into_page
except ImportError:
logger.debug("pyppeteerstealth module not available, skipping")
pass
else:
# I tried hooking events via self.page.on(Events.Page.DOMContentLoaded, inject_evasions_requiring_obj_to_page)
# But I could never get it to fire reliably, so we just inject it straight after
await inject_evasions_into_page(self.page)
# This user agent is similar to what was used when tweaking the evasions in inject_evasions_into_page(..)
user_agent = None
if request_headers and request_headers.get('User-Agent'):
# request_headers should now be a CaseInsensitiveDict
# Remove it so it's not sent again with headers after
user_agent = request_headers.pop('User-Agent').strip()
await self.page.setUserAgent(user_agent)
if not user_agent:
# Attempt to strip 'HeadlessChrome' etc
await self.page.setUserAgent(manage_user_agent(headers=request_headers, current_ua=await self.page.evaluate('navigator.userAgent')))
await self.page.setBypassCSP(True)
if request_headers:
await self.page.setExtraHTTPHeaders(request_headers)
# SOCKS5 with authentication is not supported (yet)
# https://github.com/microsoft/playwright/issues/10567
self.page.setDefaultNavigationTimeout(0)
await self.page.setCacheEnabled(True)
if self.proxy and self.proxy.get('username'):
# Setting Proxy-Authentication header is deprecated, and doing so can trigger header change errors from Puppeteer
# https://github.com/puppeteer/puppeteer/issues/676 ?
# https://help.brightdata.com/hc/en-us/articles/12632549957649-Proxy-Manager-How-to-Guides#h_01HAKWR4Q0AFS8RZTNYWRDFJC2
# https://cri.dev/posts/2020-03-30-How-to-solve-Puppeteer-Chrome-Error-ERR_INVALID_ARGUMENT/
await self.page.authenticate(self.proxy)
# Re-use as much code from browser steps as possible so it's the same
# from changedetectionio.blueprint.browser_steps.browser_steps import steppable_browser_interface
# not yet used here, we fallback to playwright when browsersteps is required
# browsersteps_interface = steppable_browser_interface()
# browsersteps_interface.page = self.page
response = await self.page.goto(url, waitUntil="load")
if response is None:
await self.page.close()
await browser.close()
logger.warning("Content Fetcher > Response object was none (as in, the response from the browser was empty, not just the content)")
raise EmptyReply(url=url, status_code=None)
self.headers = response.headers
try:
if self.webdriver_js_execute_code is not None and len(self.webdriver_js_execute_code):
await self.page.evaluate(self.webdriver_js_execute_code)
except Exception as e:
logger.warning("Got exception when running evaluate on custom JS code")
logger.error(str(e))
await self.page.close()
await browser.close()
# This can be ok, we will try to grab what we could retrieve
raise PageUnloadable(url=url, status_code=None, message=str(e))
try:
self.status_code = response.status
except Exception as e:
# https://github.com/dgtlmoon/changedetection.io/discussions/2122#discussioncomment-8241962
logger.critical(f"Response from the browser/Playwright did not have a status_code! Response follows.")
logger.critical(response)
await self.page.close()
await browser.close()
raise PageUnloadable(url=url, status_code=None, message=str(e))
if self.status_code != 200 and not ignore_status_codes:
screenshot = await self.page.screenshot(type_='jpeg',
fullPage=True,
quality=int(os.getenv("SCREENSHOT_QUALITY", 72)))
raise Non200ErrorCodeReceived(url=url, status_code=self.status_code, screenshot=screenshot)
content = await self.page.content
if not empty_pages_are_a_change and len(content.strip()) == 0:
logger.error("Content Fetcher > Content was empty (empty_pages_are_a_change is False), closing browsers")
await self.page.close()
await browser.close()
raise EmptyReply(url=url, status_code=response.status)
# Run Browser Steps here
# @todo not yet supported, we switch to playwright in this case
# if self.browser_steps_get_valid_steps():
# self.iterate_browser_steps()
await asyncio.sleep(1 + extra_wait)
# So we can find an element on the page where its selector was entered manually (maybe not xPath etc)
# Setup the xPath/VisualSelector scraper
if current_include_filters is not None:
js = json.dumps(current_include_filters)
await self.page.evaluate(f"var include_filters={js}")
else:
await self.page.evaluate(f"var include_filters=''")
self.xpath_data = await self.page.evaluate(
"async () => {" + self.xpath_element_js.replace('%ELEMENTS%', visualselector_xpath_selectors) + "}")
self.instock_data = await self.page.evaluate("async () => {" + self.instock_data_js + "}")
self.content = await self.page.content
# Bug 3 in Playwright screenshot handling
# Some bug where it gives the wrong screenshot size, but making a request with the clip set first seems to solve it
# JPEG is better here because the screenshots can be very very large
# Screenshots also travel via the ws:// (websocket) meaning that the binary data is base64 encoded
# which will significantly increase the IO size between the server and client, it's recommended to use the lowest
# acceptable screenshot quality here
try:
self.screenshot = await self.page.screenshot(type_='jpeg',
fullPage=True,
quality=int(os.getenv("SCREENSHOT_QUALITY", 72)))
except Exception as e:
logger.error("Error fetching screenshot")
# May fail on very large pages with 'WARNING: tile memory limits exceeded, some content may not draw'
# @todo after text extract, we can place some overlay text with red background to say 'cropped'
logger.error('ERROR: content-fetcher page was maybe too large for a screenshot, reverting to viewport only screenshot')
try:
self.screenshot = await self.page.screenshot(type_='jpeg',
fullPage=False,
quality=int(os.getenv("SCREENSHOT_QUALITY", 72)))
except Exception as e:
logger.error('ERROR: Failed to get viewport-only reduced screenshot :(')
pass
finally:
# It's good to log here in the case that the browser crashes on shutting down but we still get the data we need
logger.success(f"Fetching '{url}' complete, closing page")
await self.page.close()
logger.success(f"Fetching '{url}' complete, closing browser")
await browser.close()
logger.success(f"Fetching '{url}' complete, exiting puppeteer fetch.")
async def main(self, **kwargs):
await self.fetch_page(**kwargs)
def run(self, url, timeout, request_headers, request_body, request_method, ignore_status_codes=False,
current_include_filters=None, is_binary=False, empty_pages_are_a_change=False):
#@todo make update_worker async which could run any of these content_fetchers within memory and time constraints
max_time = int(os.getenv('PUPPETEER_MAX_PROCESSING_TIMEOUT_SECONDS', 180))
# This will work in 3.10 but not >= 3.11 because 3.11 wants tasks only
try:
asyncio.run(asyncio.wait_for(self.main(
url=url,
timeout=timeout,
request_headers=request_headers,
request_body=request_body,
request_method=request_method,
ignore_status_codes=ignore_status_codes,
current_include_filters=current_include_filters,
is_binary=is_binary,
empty_pages_are_a_change=empty_pages_are_a_change
), timeout=max_time))
except asyncio.TimeoutError:
raise BrowserFetchTimedOut(msg=f"Browser connected but was unable to process the page in {max_time} seconds.")
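The run() wrapper above drives the async fetch while enforcing a hard upper bound on total processing time. A minimal sketch of the same asyncio.run + wait_for pattern in isolation (the coroutine and values here are illustrative stand-ins, not the project's code):
import asyncio
import os

async def do_fetch():
    # stand-in for self.main(**kwargs) - pretend to do the browser work
    await asyncio.sleep(1)
    return "page content"

max_time = int(os.getenv('PUPPETEER_MAX_PROCESSING_TIMEOUT_SECONDS', 180))
try:
    result = asyncio.run(asyncio.wait_for(do_fetch(), timeout=max_time))
except asyncio.TimeoutError:
    raise RuntimeError(f"Browser connected but was unable to process the page in {max_time} seconds.")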

View File

@@ -0,0 +1,98 @@
from loguru import logger
import hashlib
import os
from changedetectionio import strtobool
from changedetectionio.content_fetchers.exceptions import BrowserStepsInUnsupportedFetcher, EmptyReply, Non200ErrorCodeReceived
from changedetectionio.content_fetchers.base import Fetcher
# "html_requests" is listed as the default fetcher in store.py!
class fetcher(Fetcher):
fetcher_description = "Basic fast Plaintext/HTTP Client"
def __init__(self, proxy_override=None, custom_browser_connection_url=None):
super().__init__()
self.proxy_override = proxy_override
# browser_connection_url is none because it's always 'launched locally'
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_include_filters=None,
is_binary=False,
empty_pages_are_a_change=False):
import chardet
import requests
if self.browser_steps_get_valid_steps():
raise BrowserStepsInUnsupportedFetcher(url=url)
proxies = {}
# Allows override the proxy on a per-request basis
# https://requests.readthedocs.io/en/latest/user/advanced/#socks
# Should also work with `socks5://user:pass@host:port` type syntax.
if self.proxy_override:
proxies = {'http': self.proxy_override, 'https': self.proxy_override, 'ftp': self.proxy_override}
else:
if self.system_http_proxy:
proxies['http'] = self.system_http_proxy
if self.system_https_proxy:
proxies['https'] = self.system_https_proxy
session = requests.Session()
if strtobool(os.getenv('ALLOW_FILE_URI', 'false')) and url.startswith('file://'):
from requests_file import FileAdapter
session.mount('file://', FileAdapter())
r = session.request(method=request_method,
data=request_body.encode('utf-8') if type(request_body) is str else request_body,
url=url,
headers=request_headers,
timeout=timeout,
proxies=proxies,
verify=False)
# If the response did not tell us what encoding format to expect, then use chardet to override what `requests` thinks.
# For example - some sites don't tell us it's utf-8, but return utf-8 content
# This seems to not occur when using webdriver/selenium, it seems to detect the text encoding more reliably.
# https://github.com/psf/requests/issues/1604 good info about requests encoding detection
if not is_binary:
# Don't run this for PDFs (or anything requests identified as binary); it takes a _long_ time
if not r.headers.get('content-type') or 'charset=' not in r.headers.get('content-type'):
encoding = chardet.detect(r.content)['encoding']
if encoding:
r.encoding = encoding
self.headers = r.headers
if not r.content or not len(r.content):
logger.debug(f"Requests returned empty content for '{url}'")
if not empty_pages_are_a_change:
raise EmptyReply(url=url, status_code=r.status_code)
else:
logger.debug(f"URL {url} gave zero byte content reply with Status Code {r.status_code}, but empty_pages_are_a_change = True")
# @todo test this
# @todo maybe you really want to test zero-byte return pages?
if r.status_code != 200 and not ignore_status_codes:
# maybe check with content works?
raise Non200ErrorCodeReceived(url=url, status_code=r.status_code, page_html=r.text)
self.status_code = r.status_code
if is_binary:
# Binary files just return their checksum until we add something smarter
self.content = hashlib.md5(r.content).hexdigest()
else:
self.content = r.text
self.raw_content = r.content
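For orientation, a minimal sketch of how this plaintext fetcher could be exercised on its own (URL and argument values here are illustrative; the real call sites are the update workers/processors):
f = fetcher()
f.run(
    url="https://example.com",
    timeout=15,
    request_headers={},
    request_body=None,
    request_method="GET",
)
print(f.status_code, len(f.content or ""))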

View File

@@ -0,0 +1 @@
# resources for browser injection/scraping

View File

@@ -0,0 +1,190 @@
module.exports = async ({page, context}) => {
var {
url,
execute_js,
user_agent,
extra_wait_ms,
req_headers,
include_filters,
xpath_element_js,
screenshot_quality,
proxy_username,
proxy_password,
disk_cache_dir,
no_cache_list,
block_url_list,
} = context;
await page.setBypassCSP(true)
await page.setExtraHTTPHeaders(req_headers);
if (user_agent) {
await page.setUserAgent(user_agent);
}
// https://ourcodeworld.com/articles/read/1106/how-to-solve-puppeteer-timeouterror-navigation-timeout-of-30000-ms-exceeded
await page.setDefaultNavigationTimeout(0);
if (proxy_username) {
// Setting Proxy-Authentication header is deprecated, and doing so can trigger header change errors from Puppeteer
// https://github.com/puppeteer/puppeteer/issues/676 ?
// https://help.brightdata.com/hc/en-us/articles/12632549957649-Proxy-Manager-How-to-Guides#h_01HAKWR4Q0AFS8RZTNYWRDFJC2
// https://cri.dev/posts/2020-03-30-How-to-solve-Puppeteer-Chrome-Error-ERR_INVALID_ARGUMENT/
await page.authenticate({
username: proxy_username,
password: proxy_password
});
}
await page.setViewport({
width: 1024,
height: 768,
deviceScaleFactor: 1,
});
await page.setRequestInterception(true);
if (disk_cache_dir) {
console.log(">>>>>>>>>>>>>>> LOCAL DISK CACHE ENABLED <<<<<<<<<<<<<<<<<<<<<");
}
const fs = require('fs');
const crypto = require('crypto');
function file_is_expired(file_path) {
if (!fs.existsSync(file_path)) {
return true;
}
var stats = fs.statSync(file_path);
const now_date = new Date();
const expire_seconds = 300;
if ((now_date / 1000) - (stats.mtime.getTime() / 1000) > expire_seconds) {
console.log("CACHE EXPIRED: " + file_path);
return true;
}
return false;
}
page.on('request', async (request) => {
// General blocking of requests that waste traffic
if (block_url_list.some(substring => request.url().toLowerCase().includes(substring))) return request.abort();
if (disk_cache_dir) {
const url = request.url();
const key = crypto.createHash('md5').update(url).digest("hex");
const dir_path = disk_cache_dir + key.slice(0, 1) + '/' + key.slice(1, 2) + '/' + key.slice(2, 3) + '/';
// https://stackoverflow.com/questions/4482686/check-synchronously-if-file-directory-exists-in-node-js
if (fs.existsSync(dir_path + key)) {
console.log("* CACHE HIT , using - " + dir_path + key + " - " + url);
const cached_data = fs.readFileSync(dir_path + key);
// @todo headers can come from dir_path+key+".meta" json file
request.respond({
status: 200,
//contentType: 'text/html', //@todo
body: cached_data
});
return;
}
}
request.continue();
});
if (disk_cache_dir) {
page.on('response', async (response) => {
const url = response.url();
// Basic filtering for sane responses
if (response.request().method() != 'GET' || response.request().resourceType() == 'xhr' || response.request().resourceType() == 'document' || response.status() != 200) {
console.log("Skipping (not useful) - Status:" + response.status() + " Method:" + response.request().method() + " ResourceType:" + response.request().resourceType() + " " + url);
return;
}
if (no_cache_list.some(substring => url.toLowerCase().includes(substring))) {
console.log("Skipping (no_cache_list) - " + url);
return;
}
if (url.toLowerCase().includes('data:')) {
console.log("Skipping (embedded-data) - " + url);
return;
}
response.buffer().then(buffer => {
if (buffer.length > 100) {
console.log("Cache - Saving " + response.request().method() + " - " + url + " - " + response.request().resourceType());
const key = crypto.createHash('md5').update(url).digest("hex");
const dir_path = disk_cache_dir + key.slice(0, 1) + '/' + key.slice(1, 2) + '/' + key.slice(2, 3) + '/';
if (!fs.existsSync(dir_path)) {
fs.mkdirSync(dir_path, {recursive: true})
}
if (fs.existsSync(dir_path + key)) {
if (file_is_expired(dir_path + key)) {
fs.writeFileSync(dir_path + key, buffer);
}
} else {
fs.writeFileSync(dir_path + key, buffer);
}
}
});
});
}
const r = await page.goto(url, {
waitUntil: 'load'
});
await page.waitForTimeout(1000);
await page.waitForTimeout(extra_wait_ms);
if (execute_js) {
await page.evaluate(execute_js);
await page.waitForTimeout(200);
}
var xpath_data;
var instock_data;
try {
// Not sure the best way here, in the future this should be a new package added to npm then run in evaluatedCode
// (Once the old playwright is removed)
xpath_data = await page.evaluate((include_filters) => {%xpath_scrape_code%}, include_filters);
instock_data = await page.evaluate(() => {%instock_scrape_code%});
} catch (e) {
console.log(e);
}
// Protocol error (Page.captureScreenshot): Cannot take screenshot with 0 width can come from a proxy auth failure
// Wrap it here (for now)
var b64s = false;
try {
b64s = await page.screenshot({encoding: "base64", fullPage: true, quality: screenshot_quality, type: 'jpeg'});
} catch (e) {
console.log(e);
}
// May fail on very large pages with 'WARNING: tile memory limits exceeded, some content may not draw'
if (!b64s) {
// @todo after text extract, we can place some overlay text with red background to say 'cropped'
console.error('ERROR: content-fetcher page was maybe too large for a screenshot, reverting to viewport only screenshot');
try {
b64s = await page.screenshot({encoding: "base64", quality: screenshot_quality, type: 'jpeg'});
} catch (e) {
console.log(e);
}
}
var html = await page.content();
return {
data: {
'content': html,
'headers': r.headers(),
'instock_data': instock_data,
'screenshot': b64s,
'status_code': r.status(),
'xpath_data': xpath_data
},
type: 'application/json',
};
};

View File

@@ -0,0 +1,216 @@
// Restock Detector
// (c) Leigh Morresi dgtlmoon@gmail.com
//
// Assumes the product is in stock to begin with, unless the following appears above the fold ;
// - outOfStockTexts appears above the fold (out of stock)
// - negateOutOfStockRegex (really is in stock)
function isItemInStock() {
// @todo Pass these in so the same list can be used in non-JS fetchers
const outOfStockTexts = [
' أخبرني عندما يتوفر',
'0 in stock',
'actuellement indisponible',
'agotado',
'article épuisé',
'artikel zurzeit vergriffen',
'as soon as stock is available',
'ausverkauft', // sold out
'available for back order',
'awaiting stock',
'back in stock soon',
'back-order or out of stock',
'backordered',
'benachrichtigt mich', // notify me
'brak na stanie',
'brak w magazynie',
'coming soon',
'currently have any tickets for this',
'currently unavailable',
'dieser artikel ist bald wieder verfügbar',
'dostępne wkrótce',
'en rupture de stock',
'esgotado',
'indisponível',
'isn\'t in stock right now',
'isnt in stock right now',
'item is no longer available',
'let me know when it\'s available',
'mail me when available',
'message if back in stock',
'mevcut değil',
'nachricht bei',
'nicht auf lager',
'nicht lagernd',
'nicht lieferbar',
'nicht verfügbar',
'nicht vorrätig',
'nicht zur verfügung',
'nie znaleziono produktów',
'niet beschikbaar',
'niet leverbaar',
'niet op voorraad',
'no disponible',
'no longer in stock',
'no tickets available',
'not available',
'not currently available',
'not in stock',
'notify me when available',
'notify me',
'notify when available',
'não disponível',
'não estamos a aceitar encomendas',
'out of stock',
'out-of-stock',
'prodotto esaurito',
'produkt niedostępny',
'sold out',
'sold-out',
'stokta yok',
'temporarily out of stock',
'temporarily unavailable',
'there were no search results for',
'this item is currently unavailable',
'tickets unavailable',
'tijdelijk uitverkocht',
'tükendi',
'unavailable nearby',
'unavailable tickets',
'vergriffen',
'vorbestellen',
'vorbestellung ist bald möglich',
'we don\'t currently have any',
'we couldn\'t find any products that match',
'we do not currently have an estimate of when this product will be back in stock.',
'we don\'t know when or if this item will be back in stock.',
'we were not able to find a match',
'when this arrives in stock',
'zur zeit nicht an lager',
'品切れ',
'已售',
'已售完',
'품절'
];
const vh = Math.max(document.documentElement.clientHeight || 0, window.innerHeight || 0);
function getElementBaseText(element) {
// .textContent can include text from children which may give the wrong results
// scan only immediate TEXT_NODEs, which will be a child of the element
var text = "";
for (var i = 0; i < element.childNodes.length; ++i)
if (element.childNodes[i].nodeType === Node.TEXT_NODE)
text += element.childNodes[i].textContent;
return text.toLowerCase().trim();
}
const negateOutOfStockRegex = new RegExp('^([0-9] in stock|add to cart|in stock)', 'ig');
// The out-of-stock or in-stock-text is generally always above-the-fold
// and often below-the-fold is a list of related products that may or may not contain trigger text
// so it's good to filter to just the 'above the fold' elements
// and it should be at least 100px from the top to ignore items in the toolbar, sometimes menu items like "Coming soon" exist
// @todo - if it's SVG or IMG, go into image diff mode
// %ELEMENTS% replaced at injection time because different interfaces use it with different settings
console.log("Scanning %ELEMENTS%");
function collectVisibleElements(parent, visibleElements) {
if (!parent) return; // Base case: if parent is null or undefined, return
// Add the parent itself to the visible elements array if it's of the specified types
visibleElements.push(parent);
// Iterate over the parent's children
const children = parent.children;
for (let i = 0; i < children.length; i++) {
const child = children[i];
if (
child.nodeType === Node.ELEMENT_NODE &&
window.getComputedStyle(child).display !== 'none' &&
window.getComputedStyle(child).visibility !== 'hidden' &&
child.offsetWidth >= 0 &&
child.offsetHeight >= 0 &&
window.getComputedStyle(child).contentVisibility !== 'hidden'
) {
// If the child is an element and is visible, recursively collect visible elements
collectVisibleElements(child, visibleElements);
}
}
}
const elementsToScan = [];
collectVisibleElements(document.body, elementsToScan);
var elementText = "";
// REGEXS THAT REALLY MEAN IT'S IN STOCK
for (let i = elementsToScan.length - 1; i >= 0; i--) {
const element = elementsToScan[i];
// outside the 'fold' or some weird text in the heading area
// .getBoundingClientRect() was causing a crash in chrome 119, can only be run on contentVisibility != hidden
if (element.getBoundingClientRect().top + window.scrollY >= vh || element.getBoundingClientRect().top + window.scrollY <= 100) {
continue
}
elementText = "";
try {
if (element.tagName.toLowerCase() === "input") {
elementText = element.value.toLowerCase().trim();
} else {
elementText = getElementBaseText(element);
}
} catch (e) {
console.warn('stock-not-in-stock.js scraper - handling element for gettext failed', e);
}
if (elementText.length) {
// try which ones could mean its in stock
if (negateOutOfStockRegex.test(elementText) && !elementText.includes('(0 products)')) {
console.log(`Negating/overriding 'Out of Stock' back to "Possibly in stock" found "${elementText}"`)
return 'Possibly in stock';
}
}
}
// OTHER STUFF THAT COULD BE THAT IT'S OUT OF STOCK
for (let i = elementsToScan.length - 1; i >= 0; i--) {
const element = elementsToScan[i];
// outside the 'fold' or some weird text in the heading area
// .getBoundingClientRect() was causing a crash in chrome 119, can only be run on contentVisibility != hidden
// Note: there's also an automated test that places the 'out of stock' text fairly low down
if (element.getBoundingClientRect().top + window.scrollY >= vh + 250 || element.getBoundingClientRect().top + window.scrollY <= 100) {
continue
}
elementText = "";
if (element.tagName.toLowerCase() === "input") {
elementText = element.value.toLowerCase().trim();
} else {
elementText = getElementBaseText(element);
}
if (elementText.length) {
// and these mean its out of stock
for (const outOfStockText of outOfStockTexts) {
if (elementText.includes(outOfStockText)) {
console.log(`Selected 'Out of Stock' - found text "${outOfStockText}" - "${elementText}" - offset top ${element.getBoundingClientRect().top}, page height is ${vh}`)
return outOfStockText; // item is out of stock
}
}
}
}
console.log(`Returning 'Possibly in stock' - can't find any useful matching text`)
return 'Possibly in stock'; // possibly in stock, cant decide otherwise.
}
// returns the element text that makes it think it's out of stock
return isItemInStock().trim()

View File

@@ -0,0 +1,282 @@
// Copyright (C) 2021 Leigh Morresi (dgtlmoon@gmail.com)
// All rights reserved.
// @file Scrape the page looking for elements of concern (%ELEMENTS%)
// http://matatk.agrip.org.uk/tests/position-and-width/
// https://stackoverflow.com/questions/26813480/when-is-element-getboundingclientrect-guaranteed-to-be-updated-accurate
//
// Some pages like https://www.londonstockexchange.com/stock/NCCL/ncondezi-energy-limited/analysis
// will automatically force a scroll somewhere, so include the position offset
// Let's hope the position doesn't change while we iterate the bboxes, but this is better than nothing
var scroll_y = 0;
try {
scroll_y = +document.documentElement.scrollTop || document.body.scrollTop
} catch (e) {
console.log(e);
}
// Include the getXpath script directly, easier than fetching
function getxpath(e) {
var n = e;
if (n && n.id) return '//*[@id="' + n.id + '"]';
for (var o = []; n && Node.ELEMENT_NODE === n.nodeType;) {
for (var i = 0, r = !1, d = n.previousSibling; d;) d.nodeType !== Node.DOCUMENT_TYPE_NODE && d.nodeName === n.nodeName && i++, d = d.previousSibling;
for (d = n.nextSibling; d;) {
if (d.nodeName === n.nodeName) {
r = !0;
break
}
d = d.nextSibling
}
o.push((n.prefix ? n.prefix + ":" : "") + n.localName + (i || r ? "[" + (i + 1) + "]" : "")), n = n.parentNode
}
return o.length ? "/" + o.reverse().join("/") : ""
}
const findUpTag = (el) => {
let r = el
chained_css = [];
depth = 0;
// Strategy 1: If it's an input, with name, and there's only one, prefer that
if (el.name !== undefined && el.name.length) {
var proposed = el.tagName + "[name=" + el.name + "]";
var proposed_element = window.document.querySelectorAll(proposed);
if (proposed_element.length) {
if (proposed_element.length === 1) {
return proposed;
} else {
// Some sites change ID but name= stays the same, we can hit it if we know the index
// Find all the elements that match and work out the input[n]
var n = Array.from(proposed_element).indexOf(el);
// Return a Playwright-style selector for the nth match, e.g. input[name=zipcode] >> nth=n
return proposed + " >> nth=" + n;
}
}
}
// Strategy 2: Keep going up until we hit an ID tag, imagine it's like #list-widget div h4
while (r.parentNode) {
if (depth === 5) {
break;
}
if ('' !== r.id) {
chained_css.unshift("#" + CSS.escape(r.id));
final_selector = chained_css.join(' > ');
// Be sure there's only one, some sites have multiples of the same ID tag :-(
if (window.document.querySelectorAll(final_selector).length === 1) {
return final_selector;
}
return null;
} else {
chained_css.unshift(r.tagName.toLowerCase());
}
r = r.parentNode;
depth += 1;
}
return null;
}
// @todo - if it's SVG or IMG, go into image diff mode
// %ELEMENTS% replaced at injection time because different interfaces use it with different settings
var size_pos = [];
// after page fetch, inject this JS
// build a map of all elements and their positions (maybe that only include text?)
var bbox;
console.log("Scanning %ELEMENTS%");
function collectVisibleElements(parent, visibleElements) {
if (!parent) return; // Base case: if parent is null or undefined, return
// Add the parent itself to the visible elements array if it's of the specified types
const tagName = parent.tagName.toLowerCase();
if ("%ELEMENTS%".split(',').includes(tagName)) {
visibleElements.push(parent);
}
// Iterate over the parent's children
const children = parent.children;
for (let i = 0; i < children.length; i++) {
const child = children[i];
if (
child.nodeType === Node.ELEMENT_NODE &&
window.getComputedStyle(child).display !== 'none' &&
window.getComputedStyle(child).visibility !== 'hidden' &&
child.offsetWidth >= 0 &&
child.offsetHeight >= 0 &&
window.getComputedStyle(child).contentVisibility !== 'hidden'
) {
// If the child is an element and is visible, recursively collect visible elements
collectVisibleElements(child, visibleElements);
}
}
}
// Create an array to hold the visible elements
const visibleElementsArray = [];
// Call collectVisibleElements with the starting parent element
collectVisibleElements(document.body, visibleElementsArray);
visibleElementsArray.forEach(function (element) {
bbox = element.getBoundingClientRect();
// Skip really small ones, and where width or height ==0
if (bbox['width'] * bbox['height'] < 10) {
return
}
// Don't include elements that are offset from canvas
if (bbox['top'] + scroll_y < 0 || bbox['left'] < 0) {
return
}
// @todo the getXpath kind of sucks, it doesn't know when there is for example just one ID sometimes
// it should not traverse when we know we can anchor off just an ID one level up etc..
// maybe, get current class or id, keep traversing up looking for only class or id until there is just one match
// 1st primitive - if it has a class, try joining it all and select, if there's only one.. well that's us.
xpath_result = false;
try {
var d = findUpTag(element);
if (d) {
xpath_result = d;
}
} catch (e) {
console.log(e);
}
// You could swap it and default to getXpath and then try the smarter one
// default back to the less intelligent one
if (!xpath_result) {
try {
// I've seen on FB and eBay that this doesn't work
// ReferenceError: getXPath is not defined at eval (eval at evaluate (:152:29), <anonymous>:67:20) at UtilityScript.evaluate (<anonymous>:159:18) at UtilityScript.<anonymous> (<anonymous>:1:44)
xpath_result = getxpath(element);
} catch (e) {
console.log(e);
return
}
}
let label = "not-interesting" // A placeholder, the actual labels for training are done by hand for now
let text = element.textContent.trim().slice(0, 30).trim();
while (/\n{2,}|\t{2,}/.test(text)) {
text = text.replace(/\n{2,}/g, '\n').replace(/\t{2,}/g, '\t')
}
// Try to identify any possible currency amounts "Sale: 4000" or "Sale now 3000 Kc", can help with the training.
const hasDigitCurrency = (/\d/.test(text.slice(0, 6)) || /\d/.test(text.slice(-6)) ) && /([€£$¥₩₹]|USD|AUD|EUR|Kč|kr|SEK|,)/.test(text) ;
size_pos.push({
xpath: xpath_result,
width: Math.round(bbox['width']),
height: Math.round(bbox['height']),
left: Math.floor(bbox['left']),
top: Math.floor(bbox['top']) + scroll_y,
// tagName used by Browser Steps
tagName: (element.tagName) ? element.tagName.toLowerCase() : '',
// tagtype used by Browser Steps
tagtype: (element.tagName.toLowerCase() === 'input' && element.type) ? element.type.toLowerCase() : '',
isClickable: window.getComputedStyle(element).cursor === "pointer",
// Used by the keras trainer
fontSize: window.getComputedStyle(element).getPropertyValue('font-size'),
fontWeight: window.getComputedStyle(element).getPropertyValue('font-weight'),
hasDigitCurrency: hasDigitCurrency,
label: label,
});
});
// Inject the current one set in the include_filters, which may be a CSS rule
// used for displaying the current one in VisualSelector, where it's not one we generated.
if (include_filters.length) {
let results;
// Foreach filter, go and find it on the page and add it to the results so we can visualise it again
for (const f of include_filters) {
bbox = false;
q = false;
if (!f.length) {
console.log("xpath_element_scraper: Empty filter, skipping");
continue;
}
try {
// is it xpath?
if (f.startsWith('/') || f.startsWith('xpath')) {
var qry_f = f.replace(/xpath(:|\d:)/, '')
console.log("[xpath] Scanning for included filter " + qry_f)
let xpathResult = document.evaluate(qry_f, document, null, XPathResult.ORDERED_NODE_SNAPSHOT_TYPE, null);
results = [];
for (let i = 0; i < xpathResult.snapshotLength; i++) {
results.push(xpathResult.snapshotItem(i));
}
} else {
console.log("[css] Scanning for included filter " + f)
console.log("[css] Scanning for included filter " + f);
results = document.querySelectorAll(f);
}
} catch (e) {
// Maybe catch DOMException and alert?
console.log("xpath_element_scraper: Exception selecting element from filter " + f);
console.log(e);
}
if (results != null && results.length) {
// Iterate over the results
results.forEach(node => {
// Try to resolve //something/text() back to its /something so we can at least get the bounding box
try {
if (typeof node.nodeName == 'string' && node.nodeName === '#text') {
node = node.parentElement
}
} catch (e) {
console.log(e)
console.log("xpath_element_scraper: #text resolver")
}
// #1231 - In the case an XPath attribute filter is applied, we will have to traverse up and find the element.
if (typeof node.getBoundingClientRect == 'function') {
bbox = node.getBoundingClientRect();
console.log("xpath_element_scraper: Got filter element, scroll from top was " + scroll_y)
} else {
try {
// Try and see we can find its ownerElement
bbox = node.ownerElement.getBoundingClientRect();
console.log("xpath_element_scraper: Got filter by ownerElement element, scroll from top was " + scroll_y)
} catch (e) {
console.log(e)
console.log("xpath_element_scraper: error looking up q.ownerElement")
}
}
if (bbox && bbox['width'] > 0 && bbox['height'] > 0) {
size_pos.push({
xpath: f,
width: parseInt(bbox['width']),
height: parseInt(bbox['height']),
left: parseInt(bbox['left']),
top: parseInt(bbox['top']) + scroll_y,
highlight_as_custom_filter: true
});
}
});
}
}
}
// Sort the elements so we find the smallest one first, in other words, we find the smallest one matching in that area
// so that we don't select the wrapping element by mistake and be unable to select what we want
size_pos.sort((a, b) => (a.width * a.height > b.width * b.height) ? 1 : -1)
// Window.width required for proper scaling in the frontend
return {'size_pos': size_pos, 'browser_width': window.innerWidth};

View File

@@ -0,0 +1,120 @@
import os
import time
from loguru import logger
from changedetectionio.content_fetchers.base import Fetcher
class fetcher(Fetcher):
if os.getenv("WEBDRIVER_URL"):
fetcher_description = "WebDriver Chrome/Javascript via '{}'".format(os.getenv("WEBDRIVER_URL"))
else:
fetcher_description = "WebDriver Chrome/Javascript"
# Configs for Proxy setup
# In the ENV vars, each is prefixed with "webdriver_", so it is for example "webdriver_sslProxy"
selenium_proxy_settings_mappings = ['proxyType', 'ftpProxy', 'httpProxy', 'noProxy',
'proxyAutoconfigUrl', 'sslProxy', 'autodetect',
'socksProxy', 'socksVersion', 'socksUsername', 'socksPassword']
proxy = None
def __init__(self, proxy_override=None, custom_browser_connection_url=None):
super().__init__()
from selenium.webdriver.common.proxy import Proxy as SeleniumProxy
# .strip('"') is going to save someone a lot of time when they accidently wrap the env value
if not custom_browser_connection_url:
self.browser_connection_url = os.getenv("WEBDRIVER_URL", 'http://browser-chrome:4444/wd/hub').strip('"')
else:
self.browser_connection_is_custom = True
self.browser_connection_url = custom_browser_connection_url
# If any proxy settings are enabled, then we should setup the proxy object
proxy_args = {}
for k in self.selenium_proxy_settings_mappings:
v = os.getenv('webdriver_' + k, False)
if v:
proxy_args[k] = v.strip('"')
# Map back standard HTTP_ and HTTPS_PROXY to webDriver httpProxy/sslProxy
if not proxy_args.get('httpProxy') and self.system_http_proxy:
proxy_args['httpProxy'] = self.system_http_proxy
if not proxy_args.get('sslProxy') and self.system_https_proxy:
proxy_args['sslProxy'] = self.system_https_proxy
# Allows override the proxy on a per-request basis
if proxy_override is not None:
proxy_args['httpProxy'] = proxy_override
if proxy_args:
self.proxy = SeleniumProxy(raw=proxy_args)
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_include_filters=None,
is_binary=False,
empty_pages_are_a_change=False):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options as ChromeOptions
from selenium.common.exceptions import WebDriverException
# request_body, request_method unused for now, until some magic in the future happens.
options = ChromeOptions()
if self.proxy:
options.proxy = self.proxy
self.driver = webdriver.Remote(
command_executor=self.browser_connection_url,
options=options)
try:
self.driver.get(url)
except WebDriverException as e:
# Be sure we close the session window
self.quit()
raise
self.driver.set_window_size(1280, 1024)
self.driver.implicitly_wait(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)))
if self.webdriver_js_execute_code is not None:
self.driver.execute_script(self.webdriver_js_execute_code)
# Selenium doesn't automatically wait for actions as well as Playwright does, so wait again
self.driver.implicitly_wait(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)))
# @todo - how to check this? is it possible?
self.status_code = 200
# @todo somehow we should try to get this working for WebDriver
# raise EmptyReply(url=url, status_code=r.status_code)
# @todo - dom wait loaded?
time.sleep(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay)
self.content = self.driver.page_source
self.headers = {}
self.screenshot = self.driver.get_screenshot_as_png()
# Does the connection to the webdriver work? run a test connection.
def is_ready(self):
from selenium import webdriver
from selenium.webdriver.chrome.options import Options as ChromeOptions
self.driver = webdriver.Remote(
command_executor=self.browser_connection_url,
options=ChromeOptions())
# driver.quit() seems to cause better exceptions
self.quit()
return True
def quit(self):
if self.driver:
try:
self.driver.quit()
except Exception as e:
logger.debug(f"Content Fetcher > Exception in chrome shutdown/quit {str(e)}")

113
changedetectionio/diff.py Normal file
View File

@@ -0,0 +1,113 @@
import difflib
from typing import List, Iterator, Union
REMOVED_STYLE = "background-color: #fadad7; color: #b30000;"
ADDED_STYLE = "background-color: #eaf2c2; color: #406619;"
def same_slicer(lst: List[str], start: int, end: int) -> List[str]:
"""Return a slice of the list, or a single element if start == end."""
return lst[start:end] if start != end else [lst[start]]
def customSequenceMatcher(
before: List[str],
after: List[str],
include_equal: bool = False,
include_removed: bool = True,
include_added: bool = True,
include_replaced: bool = True,
include_change_type_prefix: bool = True,
html_colour: bool = False
) -> Iterator[List[str]]:
"""
Compare two sequences and yield differences based on specified parameters.
Args:
before (List[str]): Original sequence
after (List[str]): Modified sequence
include_equal (bool): Include unchanged parts
include_removed (bool): Include removed parts
include_added (bool): Include added parts
include_replaced (bool): Include replaced parts
include_change_type_prefix (bool): Add prefixes to indicate change types
html_colour (bool): Use HTML background colors for differences
Yields:
List[str]: Differences between sequences
"""
cruncher = difflib.SequenceMatcher(isjunk=lambda x: x in " \t", a=before, b=after)
for tag, alo, ahi, blo, bhi in cruncher.get_opcodes():
if include_equal and tag == 'equal':
yield before[alo:ahi]
elif include_removed and tag == 'delete':
if html_colour:
yield [f'<span class="cdio" style="{REMOVED_STYLE}">{line}</span>' for line in same_slicer(before, alo, ahi)]
else:
yield [f"(removed) {line}" for line in same_slicer(before, alo, ahi)] if include_change_type_prefix else same_slicer(before, alo, ahi)
elif include_replaced and tag == 'replace':
if html_colour:
yield [f'<span class="cdio" style="{REMOVED_STYLE}">{line}</span>' for line in same_slicer(before, alo, ahi)] + \
[f'<span class="cdio" style="{ADDED_STYLE}">{line}</span>' for line in same_slicer(after, blo, bhi)]
else:
yield [f"(changed) {line}" for line in same_slicer(before, alo, ahi)] + \
[f"(into) {line}" for line in same_slicer(after, blo, bhi)] if include_change_type_prefix else same_slicer(before, alo, ahi) + same_slicer(after, blo, bhi)
elif include_added and tag == 'insert':
if html_colour:
yield [f'<span class="cdio" style="{ADDED_STYLE}">{line}</span>' for line in same_slicer(after, blo, bhi)]
else:
yield [f"(added) {line}" for line in same_slicer(after, blo, bhi)] if include_change_type_prefix else same_slicer(after, blo, bhi)
def render_diff(
previous_version_file_contents: str,
newest_version_file_contents: str,
include_equal: bool = False,
include_removed: bool = True,
include_added: bool = True,
include_replaced: bool = True,
line_feed_sep: str = "\n",
include_change_type_prefix: bool = True,
patch_format: bool = False,
html_colour: bool = False
) -> str:
"""
Render the difference between two file contents.
Args:
previous_version_file_contents (str): Original file contents
newest_version_file_contents (str): Modified file contents
include_equal (bool): Include unchanged parts
include_removed (bool): Include removed parts
include_added (bool): Include added parts
include_replaced (bool): Include replaced parts
line_feed_sep (str): Separator for lines in output
include_change_type_prefix (bool): Add prefixes to indicate change types
patch_format (bool): Use patch format for output
html_colour (bool): Use HTML background colors for differences
Returns:
str: Rendered difference
"""
newest_lines = [line.rstrip() for line in newest_version_file_contents.splitlines()]
previous_lines = [line.rstrip() for line in previous_version_file_contents.splitlines()] if previous_version_file_contents else []
if patch_format:
patch = difflib.unified_diff(previous_lines, newest_lines)
return line_feed_sep.join(patch)
rendered_diff = customSequenceMatcher(
before=previous_lines,
after=newest_lines,
include_equal=include_equal,
include_removed=include_removed,
include_added=include_added,
include_replaced=include_replaced,
include_change_type_prefix=include_change_type_prefix,
html_colour=html_colour
)
def flatten(lst: List[Union[str, List[str]]]) -> str:
return line_feed_sep.join(flatten(x) if isinstance(x, list) else x for x in lst)
return flatten(rendered_diff)
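A small sketch of what the defaults produce, to make the parameters above concrete (example strings are illustrative only):
before = "price: 10\nin stock"
after = "price: 12\nin stock"
print(render_diff(before, after))
# (changed) price: 10
# (into) price: 12
print(render_diff(before, after, include_equal=True, include_change_type_prefix=False))
# price: 10
# price: 12
# in stock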

File diff suppressed because it is too large

762
changedetectionio/forms.py Normal file
View File

@@ -0,0 +1,762 @@
import os
import re
from loguru import logger
from wtforms.widgets.core import TimeInput
from changedetectionio.strtobool import strtobool
from wtforms import (
BooleanField,
Form,
Field,
IntegerField,
RadioField,
SelectField,
StringField,
SubmitField,
TextAreaField,
fields,
validators,
widgets
)
from flask_wtf.file import FileField, FileAllowed
from wtforms.fields import FieldList
from wtforms.validators import ValidationError
from validators.url import url as url_validator
# default
# each select <option data-enabled="enabled-0-0"
from changedetectionio.blueprint.browser_steps.browser_steps import browser_step_ui_config
from changedetectionio import html_tools, content_fetchers
from changedetectionio.notification import (
valid_notification_formats,
)
from wtforms.fields import FormField
dictfilt = lambda x, y: dict([ (i,x[i]) for i in x if i in set(y) ])
valid_method = {
'GET',
'POST',
'PUT',
'PATCH',
'DELETE',
'OPTIONS',
}
default_method = 'GET'
allow_simplehost = not strtobool(os.getenv('BLOCK_SIMPLEHOSTS', 'False'))
class StringListField(StringField):
widget = widgets.TextArea()
def _value(self):
if self.data:
# ignore empty lines in the storage
data = list(filter(lambda x: len(x.strip()), self.data))
# Apply strip to each line
data = list(map(lambda x: x.strip(), data))
return "\r\n".join(data)
else:
return u''
# incoming
def process_formdata(self, valuelist):
if valuelist and len(valuelist[0].strip()):
# Remove empty strings, stripping and splitting \r\n, only \n etc.
self.data = valuelist[0].splitlines()
# Remove empty lines from the final data
self.data = list(filter(lambda x: len(x.strip()), self.data))
else:
self.data = []
class SaltyPasswordField(StringField):
widget = widgets.PasswordInput()
encrypted_password = ""
def build_password(self, password):
import base64
import hashlib
import secrets
# Make a new salt on every new password and store it with the password
salt = secrets.token_bytes(32)
key = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, 100000)
store = base64.b64encode(salt + key).decode('ascii')
return store
# incoming
def process_formdata(self, valuelist):
if valuelist:
# Be really sure it's non-zero in length
if len(valuelist[0].strip()) > 0:
self.encrypted_password = self.build_password(valuelist[0])
self.data = ""
else:
self.data = False
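build_password() above stores base64(salt + PBKDF2-HMAC-SHA256 key) with a fresh 32-byte salt per password. A check against such a stored value would look roughly like this (an illustrative sketch mirroring the layout above, not code taken from the project):
import base64
import hashlib
import hmac

def password_matches(candidate, stored):
    raw = base64.b64decode(stored)
    salt, key = raw[:32], raw[32:]  # 32-byte salt, then the derived key
    candidate_key = hashlib.pbkdf2_hmac('sha256', candidate.encode('utf-8'), salt, 100000)
    return hmac.compare_digest(candidate_key, key)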
class StringTagUUID(StringField):
# process_formdata(self, valuelist) handled manually in POST handler
# Is what is shown when field <input> is rendered
def _value(self):
# Tag UUID to name, on submit it will convert it back (in the submit handler of init.py)
if self.data and type(self.data) is list:
tag_titles = []
for i in self.data:
tag = self.datastore.data['settings']['application']['tags'].get(i)
if tag:
tag_title = tag.get('title')
if tag_title:
tag_titles.append(tag_title)
return ', '.join(tag_titles)
if not self.data:
return ''
return 'error'
class TimeDurationForm(Form):
hours = SelectField(choices=[(f"{i}", f"{i}") for i in range(0, 25)], default="24", validators=[validators.Optional()])
minutes = SelectField(choices=[(f"{i}", f"{i}") for i in range(0, 60)], default="00", validators=[validators.Optional()])
class TimeStringField(Field):
"""
A WTForms field for time inputs (HH:MM) that stores the value as a string.
"""
widget = TimeInput() # Use the built-in time input widget
def _value(self):
"""
Returns the value for rendering in the form.
"""
return self.data if self.data is not None else ""
def process_formdata(self, valuelist):
"""
Processes the raw input from the form and stores it as a string.
"""
if valuelist:
time_str = valuelist[0]
# Simple validation for HH:MM format
if not time_str or len(time_str.split(":")) != 2:
raise ValidationError("Invalid time format. Use HH:MM.")
self.data = time_str
class validateTimeZoneName(object):
"""
Validates that a timezone name is a valid zoneinfo timezone name
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
from zoneinfo import available_timezones
python_timezones = available_timezones()
if field.data and field.data not in python_timezones:
raise ValidationError("Not a valid timezone name")
class ScheduleLimitDaySubForm(Form):
enabled = BooleanField("not set", default=True)
start_time = TimeStringField("Start At", default="00:00", render_kw={"placeholder": "HH:MM"}, validators=[validators.Optional()])
duration = FormField(TimeDurationForm, label="Run duration")
class ScheduleLimitForm(Form):
enabled = BooleanField("Use time scheduler", default=False)
# Because the label for="" doesn't line up/work with the actual checkbox
monday = FormField(ScheduleLimitDaySubForm, label="")
tuesday = FormField(ScheduleLimitDaySubForm, label="")
wednesday = FormField(ScheduleLimitDaySubForm, label="")
thursday = FormField(ScheduleLimitDaySubForm, label="")
friday = FormField(ScheduleLimitDaySubForm, label="")
saturday = FormField(ScheduleLimitDaySubForm, label="")
sunday = FormField(ScheduleLimitDaySubForm, label="")
timezone = StringField("Optional timezone to run in",
render_kw={"list": "timezones"},
validators=[validateTimeZoneName()]
)
def __init__(
self,
formdata=None,
obj=None,
prefix="",
data=None,
meta=None,
**kwargs,
):
super().__init__(formdata, obj, prefix, data, meta, **kwargs)
self.monday.form.enabled.label.text="Monday"
self.tuesday.form.enabled.label.text = "Tuesday"
self.wednesday.form.enabled.label.text = "Wednesday"
self.thursday.form.enabled.label.text = "Thursday"
self.friday.form.enabled.label.text = "Friday"
self.saturday.form.enabled.label.text = "Saturday"
self.sunday.form.enabled.label.text = "Sunday"
class TimeBetweenCheckForm(Form):
weeks = IntegerField('Weeks', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
days = IntegerField('Days', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
hours = IntegerField('Hours', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
minutes = IntegerField('Minutes', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
seconds = IntegerField('Seconds', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
# @todo add total seconds minimum validator = minimum_seconds_recheck_time
# Separated by key:value
class StringDictKeyValue(StringField):
widget = widgets.TextArea()
def _value(self):
if self.data:
output = u''
for k in self.data.keys():
output += "{}: {}\r\n".format(k, self.data[k])
return output
else:
return u''
# incoming
def process_formdata(self, valuelist):
if valuelist:
self.data = {}
# Remove empty strings
cleaned = list(filter(None, valuelist[0].split("\n")))
for s in cleaned:
parts = s.strip().split(':', 1)
if len(parts) == 2:
self.data.update({parts[0].strip(): parts[1].strip()})
else:
self.data = {}
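The field above serialises a dict to "Key: Value" lines and parses them back. A rough standalone illustration of the incoming parse (sample input is made up):
raw = "User-Agent: Mozilla/5.0\nAccept: text/html"
parsed = {}
for s in filter(None, raw.split("\n")):
    parts = s.strip().split(':', 1)
    if len(parts) == 2:
        parsed[parts[0].strip()] = parts[1].strip()
# parsed == {'User-Agent': 'Mozilla/5.0', 'Accept': 'text/html'}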
class ValidateContentFetcherIsReady(object):
"""
Validates that the chosen content fetcher is ready to be used (currently disabled, see the commented-out check below)
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
return
# AttributeError: module 'changedetectionio.content_fetcher' has no attribute 'extra_browser_unlocked<>ASDF213r123r'
# Better would be a radiohandler that keeps a reference to each class
# if field.data is not None and field.data != 'system':
# klass = getattr(content_fetcher, field.data)
# some_object = klass()
# try:
# ready = some_object.is_ready()
#
# except urllib3.exceptions.MaxRetryError as e:
# driver_url = some_object.command_executor
# message = field.gettext('Content fetcher \'%s\' did not respond.' % (field.data))
# message += '<br>' + field.gettext(
# 'Be sure that the selenium/webdriver runner is running and accessible via network from this container/host.')
# message += '<br>' + field.gettext('Did you follow the instructions in the wiki?')
# message += '<br><br>' + field.gettext('WebDriver Host: %s' % (driver_url))
# message += '<br><a href="https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver">Go here for more information</a>'
# message += '<br>'+field.gettext('Content fetcher did not respond properly, unable to use it.\n %s' % (str(e)))
#
# raise ValidationError(message)
#
# except Exception as e:
# message = field.gettext('Content fetcher \'%s\' did not respond properly, unable to use it.\n %s')
# raise ValidationError(message % (field.data, e))
class ValidateNotificationBodyAndTitleWhenURLisSet(object):
"""
Validates that they entered something in both notification title+body when the URL is set
Due to https://github.com/dgtlmoon/changedetection.io/issues/360
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
if len(field.data):
if not len(form.notification_title.data) or not len(form.notification_body.data):
message = field.gettext('Notification Body and Title is required when a Notification URL is used')
raise ValidationError(message)
class ValidateAppRiseServers(object):
"""
Validates that each URL given is compatible with AppRise
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
import apprise
apobj = apprise.Apprise()
# so that the custom endpoints are registered
from changedetectionio.apprise_plugin import apprise_custom_api_call_wrapper
for server_url in field.data:
url = server_url.strip()
if url.startswith("#"):
continue
if not apobj.add(url):
message = field.gettext('\'%s\' is not a valid AppRise URL.' % (url))
raise ValidationError(message)
class ValidateJinja2Template(object):
"""
Validates that a {token} is from a valid set
"""
def __call__(self, form, field):
from changedetectionio import notification
from jinja2 import BaseLoader, TemplateSyntaxError, UndefinedError
from jinja2.sandbox import ImmutableSandboxedEnvironment
from jinja2.meta import find_undeclared_variables
import jinja2.exceptions
# Might be a list of text, or might be just text (like from the apprise url list)
joined_data = ' '.join(map(str, field.data)) if isinstance(field.data, list) else f"{field.data}"
try:
jinja2_env = ImmutableSandboxedEnvironment(loader=BaseLoader)
jinja2_env.globals.update(notification.valid_tokens)
# Extra validation tokens provided on the form_class(... extra_tokens={}) setup
if hasattr(field, 'extra_notification_tokens'):
jinja2_env.globals.update(field.extra_notification_tokens)
jinja2_env.from_string(joined_data).render()
except TemplateSyntaxError as e:
raise ValidationError(f"This is not a valid Jinja2 template: {e}") from e
except UndefinedError as e:
raise ValidationError(f"A variable or function is not defined: {e}") from e
except jinja2.exceptions.SecurityError as e:
raise ValidationError(f"This is not a valid Jinja2 template: {e}") from e
ast = jinja2_env.parse(joined_data)
undefined = ", ".join(find_undeclared_variables(ast))
if undefined:
raise ValidationError(
f"The following tokens used in the notification are not valid: {undefined}"
)
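A rough illustration of what the sandboxed rendering above catches (not project code; 'watch_url' stands in for one of notification.valid_tokens):
from jinja2 import BaseLoader
from jinja2.sandbox import ImmutableSandboxedEnvironment

env = ImmutableSandboxedEnvironment(loader=BaseLoader)
env.globals.update({'watch_url': 'https://example.com'})
env.from_string("{{ watch_url }} had a change.").render()            # renders fine
env.from_string("{{ watch_url.__class__.__name__ }}").render()       # raises SecurityError via the sandbox
# "{{ not_a_real_token }}" renders as empty, but find_undeclared_variables() on the parsed template reports it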
class validateURL(object):
"""
Flask wtform validators won't work with basic auth
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
# This should raise a ValidationError() or not
validate_url(field.data)
def validate_url(test_url):
# If hosts that only contain alphanumerics are allowed ("localhost" for example)
try:
url_validator(test_url, simple_host=allow_simplehost)
except validators.ValidationError:
#@todo check for xss
message = f"'{test_url}' is not a valid URL."
# This should be wtforms.validators.
raise ValidationError(message)
from .model.Watch import is_safe_url
if not is_safe_url(test_url):
# This should be wtforms.validators.
raise ValidationError('Watch protocol is not permitted by SAFE_PROTOCOL_REGEX or incorrect URL format')
class ValidateListRegex(object):
"""
Validates that anything that looks like a regex passes as a regex
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
for line in field.data:
if re.search(html_tools.PERL_STYLE_REGEX, line, re.IGNORECASE):
try:
regex = html_tools.perl_style_slash_enclosed_regex_to_options(line)
re.compile(regex)
except re.error:
message = field.gettext('RegEx \'%s\' is not a valid regular expression.')
raise ValidationError(message % (line))
class ValidateCSSJSONXPATHInput(object):
"""
Filter validation
@todo CSS validator ;)
"""
def __init__(self, message=None, allow_xpath=True, allow_json=True):
self.message = message
self.allow_xpath = allow_xpath
self.allow_json = allow_json
def __call__(self, form, field):
if isinstance(field.data, str):
data = [field.data]
else:
data = field.data
for line in data:
# Nothing to see here
if not len(line.strip()):
return
# Does it look like XPath?
if line.strip()[0] == '/' or line.strip().startswith('xpath:'):
if not self.allow_xpath:
raise ValidationError("XPath not permitted in this field!")
from lxml import etree, html
import elementpath
# xpath 2.0-3.1
from elementpath.xpath3 import XPath3Parser
tree = html.fromstring("<html></html>")
line = line.replace('xpath:', '')
try:
elementpath.select(tree, line.strip(), parser=XPath3Parser)
except elementpath.ElementPathError as e:
message = field.gettext('\'%s\' is not a valid XPath expression. (%s)')
raise ValidationError(message % (line, str(e)))
except:
raise ValidationError("A system-error occurred when validating your XPath expression")
if line.strip().startswith('xpath1:'):
if not self.allow_xpath:
raise ValidationError("XPath not permitted in this field!")
from lxml import etree, html
tree = html.fromstring("<html></html>")
line = re.sub(r'^xpath1:', '', line)
try:
tree.xpath(line.strip())
except etree.XPathEvalError as e:
message = field.gettext('\'%s\' is not a valid XPath expression. (%s)')
raise ValidationError(message % (line, str(e)))
except:
raise ValidationError("A system-error occurred when validating your XPath expression")
if 'json:' in line:
if not self.allow_json:
raise ValidationError("JSONPath not permitted in this field!")
from jsonpath_ng.exceptions import (
JsonPathLexerError,
JsonPathParserError,
)
from jsonpath_ng.ext import parse
input = line.replace('json:', '')
try:
parse(input)
except (JsonPathParserError, JsonPathLexerError) as e:
message = field.gettext('\'%s\' is not a valid JSONPath expression. (%s)')
raise ValidationError(message % (input, str(e)))
except:
raise ValidationError("A system-error occurred when validating your JSONPath expression")
# Re #265 - maybe in the future fetch the page and offer a
# warning/notice that it's possible the rule doesn't yet match anything?
if not self.allow_json:
raise ValidationError("jq not permitted in this field!")
if 'jq:' in line:
try:
import jq
except ModuleNotFoundError:
# `jq` requires full compilation in windows and so isn't generally available
raise ValidationError("jq not support not found")
input = line.replace('jq:', '')
try:
jq.compile(input)
except (ValueError) as e:
message = field.gettext('\'%s\' is not a valid jq expression. (%s)')
raise ValidationError(message % (input, str(e)))
except:
raise ValidationError("A system-error occurred when validating your jq expression")
class quickWatchForm(Form):
from . import processors
url = fields.URLField('URL', validators=[validateURL()])
tags = StringTagUUID('Group tag', [validators.Optional()])
watch_submit_button = SubmitField('Watch', render_kw={"class": "pure-button pure-button-primary"})
processor = RadioField(u'Processor', choices=processors.available_processors(), default="text_json_diff")
edit_and_watch_submit_button = SubmitField('Edit > Watch', render_kw={"class": "pure-button pure-button-primary"})
# Common to a single watch and the global settings
class commonSettingsForm(Form):
from . import processors
def __init__(self, formdata=None, obj=None, prefix="", data=None, meta=None, **kwargs):
super().__init__(formdata, obj, prefix, data, meta, **kwargs)
self.notification_body.extra_notification_tokens = kwargs.get('extra_notification_tokens', {})
self.notification_title.extra_notification_tokens = kwargs.get('extra_notification_tokens', {})
self.notification_urls.extra_notification_tokens = kwargs.get('extra_notification_tokens', {})
extract_title_as_title = BooleanField('Extract <title> from document and use as watch title', default=False)
fetch_backend = RadioField(u'Fetch Method', choices=content_fetchers.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
notification_body = TextAreaField('Notification Body', default='{{ watch_url }} had a change.', validators=[validators.Optional(), ValidateJinja2Template()])
notification_format = SelectField('Notification format', choices=valid_notification_formats.keys())
notification_title = StringField('Notification Title', default='ChangeDetection.io Notification - {{ watch_url }}', validators=[validators.Optional(), ValidateJinja2Template()])
notification_urls = StringListField('Notification URL List', validators=[validators.Optional(), ValidateAppRiseServers(), ValidateJinja2Template()])
processor = RadioField( label=u"Processor - What do you want to achieve?", choices=processors.available_processors(), default="text_json_diff")
timezone = StringField("Timezone for watch schedule", render_kw={"list": "timezones"}, validators=[validateTimeZoneName()])
webdriver_delay = IntegerField('Wait seconds before extracting text', validators=[validators.Optional(), validators.NumberRange(min=1, message="Should contain one or more seconds")])
class importForm(Form):
from . import processors
processor = RadioField(u'Processor', choices=processors.available_processors(), default="text_json_diff")
urls = TextAreaField('URLs')
xlsx_file = FileField('Upload .xlsx file', validators=[FileAllowed(['xlsx'], 'Must be .xlsx file!')])
file_mapping = SelectField('File mapping', [validators.DataRequired()], choices={('wachete', 'Wachete mapping'), ('custom','Custom mapping')})
class SingleBrowserStep(Form):
operation = SelectField('Operation', [validators.Optional()], choices=browser_step_ui_config.keys())
# maybe better to set some <script>var..
selector = StringField('Selector', [validators.Optional()], render_kw={"placeholder": "CSS or xPath selector"})
optional_value = StringField('value', [validators.Optional()], render_kw={"placeholder": "Value"})
# @todo move to JS? ajax fetch new field?
# remove_button = SubmitField('-', render_kw={"type": "button", "class": "pure-button pure-button-primary", 'title': 'Remove'})
# add_button = SubmitField('+', render_kw={"type": "button", "class": "pure-button pure-button-primary", 'title': 'Add new step after'})
class processor_text_json_diff_form(commonSettingsForm):
url = fields.URLField('URL', validators=[validateURL()])
tags = StringTagUUID('Group tag', [validators.Optional()], default='')
time_between_check = FormField(TimeBetweenCheckForm)
time_schedule_limit = FormField(ScheduleLimitForm)
time_between_check_use_default = BooleanField('Use global settings for time between check', default=False)
include_filters = StringListField('CSS/JSONPath/JQ/XPath Filters', [ValidateCSSJSONXPATHInput()], default='')
subtractive_selectors = StringListField('Remove elements', [ValidateCSSJSONXPATHInput(allow_json=False)])
extract_text = StringListField('Extract text', [ValidateListRegex()])
title = StringField('Title', default='')
ignore_text = StringListField('Ignore lines containing', [ValidateListRegex()])
headers = StringDictKeyValue('Request headers')
body = TextAreaField('Request body', [validators.Optional()])
method = SelectField('Request method', choices=valid_method, default=default_method)
ignore_status_codes = BooleanField('Ignore status codes (process non-2xx status codes as normal)', default=False)
check_unique_lines = BooleanField('Only trigger when unique lines appear in all history', default=False)
remove_duplicate_lines = BooleanField('Remove duplicate lines of text', default=False)
sort_text_alphabetically = BooleanField('Sort text alphabetically', default=False)
trim_text_whitespace = BooleanField('Trim whitespace before and after text', default=False)
filter_text_added = BooleanField('Added lines', default=True)
filter_text_replaced = BooleanField('Replaced/changed lines', default=True)
filter_text_removed = BooleanField('Removed lines', default=True)
trigger_text = StringListField('Trigger/wait for text', [validators.Optional(), ValidateListRegex()])
if os.getenv("PLAYWRIGHT_DRIVER_URL"):
browser_steps = FieldList(FormField(SingleBrowserStep), min_entries=10)
text_should_not_be_present = StringListField('Block change-detection while text matches', [validators.Optional(), ValidateListRegex()])
webdriver_js_execute_code = TextAreaField('Execute JavaScript before change detection', render_kw={"rows": "5"}, validators=[validators.Optional()])
save_button = SubmitField('Save', render_kw={"class": "pure-button button-small pure-button-primary"})
proxy = RadioField('Proxy')
filter_failure_notification_send = BooleanField(
'Send a notification when the filter can no longer be found on the page', default=False)
notification_muted = BooleanField('Notifications Muted / Off', default=False)
notification_screenshot = BooleanField('Attach screenshot to notification (where possible)', default=False)
def extra_tab_content(self):
return None
def extra_form_content(self):
return None
def validate(self, **kwargs):
if not super().validate():
return False
from changedetectionio.safe_jinja import render as jinja_render
result = True
# Fail form validation when a body is set for a GET
if self.method.data == 'GET' and self.body.data:
self.body.errors.append('Body must be empty when Request Method is set to GET')
result = False
# Attempt to validate jinja2 templates in the URL
try:
jinja_render(template_str=self.url.data)
except ModuleNotFoundError as e:
# in case jinja2_time or another extension is missing
logger.error(e)
self.url.errors.append(f'Invalid template syntax configuration: {e}')
result = False
except Exception as e:
logger.error(e)
self.url.errors.append(f'Invalid template syntax: {e}')
result = False
# Attempt to validate jinja2 templates in the body
if self.body.data and self.body.data.strip():
try:
jinja_render(template_str=self.body.data)
except ModuleNotFoundError as e:
# in case jinja2_time or another extension is missing
logger.error(e)
self.body.errors.append(f'Invalid template syntax configuration: {e}')
result = False
except Exception as e:
logger.error(e)
self.body.errors.append(f'Invalid template syntax: {e}')
result = False
# Attempt to validate jinja2 templates in the headers
if len(self.headers.data) > 0:
try:
for header, value in self.headers.data.items():
jinja_render(template_str=value)
except ModuleNotFoundError as e:
# in case jinja2_time or another extension is missing
logger.error(e)
self.headers.errors.append(f'Invalid template syntax configuration: {e}')
result = False
except Exception as e:
logger.error(e)
self.headers.errors.append(f'Invalid template syntax in "{header}" header: {e}')
result = False
return result
def __init__(
self,
formdata=None,
obj=None,
prefix="",
data=None,
meta=None,
**kwargs,
):
super().__init__(formdata, obj, prefix, data, meta, **kwargs)
if kwargs and kwargs.get('default_system_settings'):
default_tz = kwargs.get('default_system_settings').get('application', {}).get('timezone')
if default_tz:
self.time_schedule_limit.form.timezone.render_kw['placeholder'] = default_tz
class SingleExtraProxy(Form):
# maybe better to set some <script>var..
proxy_name = StringField('Name', [validators.Optional()], render_kw={"placeholder": "Name"})
proxy_url = StringField('Proxy URL', [validators.Optional()], render_kw={"placeholder": "socks5:// or regular proxy http://user:pass@...:3128", "size":50})
# @todo do the validation here instead
class SingleExtraBrowser(Form):
browser_name = StringField('Name', [validators.Optional()], render_kw={"placeholder": "Name"})
browser_connection_url = StringField('Browser connection URL', [validators.Optional()], render_kw={"placeholder": "wss://brightdata... wss://oxylabs etc", "size":50})
# @todo do the validation here instead
class DefaultUAInputForm(Form):
html_requests = StringField('Plaintext requests', validators=[validators.Optional()], render_kw={"placeholder": "<default>"})
if os.getenv("PLAYWRIGHT_DRIVER_URL") or os.getenv("WEBDRIVER_URL"):
html_webdriver = StringField('Chrome requests', validators=[validators.Optional()], render_kw={"placeholder": "<default>"})
# datastore.data['settings']['requests']..
class globalSettingsRequestForm(Form):
time_between_check = FormField(TimeBetweenCheckForm)
time_schedule_limit = FormField(ScheduleLimitForm)
proxy = RadioField('Proxy')
jitter_seconds = IntegerField('Random jitter seconds ± check',
render_kw={"style": "width: 5em;"},
validators=[validators.NumberRange(min=0, message="Should contain zero or more seconds")])
extra_proxies = FieldList(FormField(SingleExtraProxy), min_entries=5)
extra_browsers = FieldList(FormField(SingleExtraBrowser), min_entries=5)
default_ua = FormField(DefaultUAInputForm, label="Default User-Agent overrides")
def validate_extra_proxies(self, extra_validators=None):
for e in self.data['extra_proxies']:
if e.get('proxy_name') or e.get('proxy_url'):
if not e.get('proxy_name','').strip() or not e.get('proxy_url','').strip():
self.extra_proxies.errors.append('Both a name and a Proxy URL are required.')
return False
# datastore.data['settings']['application']..
class globalSettingsApplicationForm(commonSettingsForm):
api_access_token_enabled = BooleanField('API access token security check enabled', default=True, validators=[validators.Optional()])
base_url = StringField('Notification base URL override',
validators=[validators.Optional()],
render_kw={"placeholder": os.getenv('BASE_URL', 'Not set')}
)
empty_pages_are_a_change = BooleanField('Treat empty pages as a change?', default=False)
fetch_backend = RadioField('Fetch Method', default="html_requests", choices=content_fetchers.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
global_ignore_text = StringListField('Ignore Text', [ValidateListRegex()])
global_subtractive_selectors = StringListField('Remove elements', [ValidateCSSJSONXPATHInput(allow_json=False)])
ignore_whitespace = BooleanField('Ignore whitespace')
password = SaltyPasswordField()
pager_size = IntegerField('Pager size',
render_kw={"style": "width: 5em;"},
validators=[validators.NumberRange(min=0,
message="Should be atleast zero (disabled)")])
removepassword_button = SubmitField('Remove password', render_kw={"class": "pure-button pure-button-primary"})
render_anchor_tag_content = BooleanField('Render anchor tag content', default=False)
shared_diff_access = BooleanField('Allow access to view diff page when password is enabled', default=False, validators=[validators.Optional()])
rss_hide_muted_watches = BooleanField('Hide muted watches from RSS feed', default=True,
validators=[validators.Optional()])
filter_failure_notification_threshold_attempts = IntegerField('Number of times the filter can be missing before sending a notification',
render_kw={"style": "width: 5em;"},
validators=[validators.NumberRange(min=0,
message="Should contain zero or more attempts")])
class globalSettingsForm(Form):
# Define these as FormFields/"sub forms", this way it matches the JSON storage
# datastore.data['settings']['application']..
# datastore.data['settings']['requests']..
def __init__(self, formdata=None, obj=None, prefix="", data=None, meta=None, **kwargs):
super().__init__(formdata, obj, prefix, data, meta, **kwargs)
self.application.notification_body.extra_notification_tokens = kwargs.get('extra_notification_tokens', {})
self.application.notification_title.extra_notification_tokens = kwargs.get('extra_notification_tokens', {})
self.application.notification_urls.extra_notification_tokens = kwargs.get('extra_notification_tokens', {})
requests = FormField(globalSettingsRequestForm)
application = FormField(globalSettingsApplicationForm)
save_button = SubmitField('Save', render_kw={"class": "pure-button button-small pure-button-primary"})
class extractDataForm(Form):
extract_regex = StringField('RegEx to extract', validators=[validators.Length(min=1, message="Needs a RegEx")])
extract_submit_button = SubmitField('Extract as CSV', render_kw={"class": "pure-button pure-button-primary"})

View File

@@ -0,0 +1,539 @@
from typing import List
from lxml import etree
import json
import re
# HTML added to be sure each result matching a filter (.example) gets converted to a new line by Inscriptis
TEXT_FILTER_LIST_LINE_SUFFIX = "<br>"
TRANSLATE_WHITESPACE_TABLE = str.maketrans('', '', '\r\n\t ')
PERL_STYLE_REGEX = r'^/(.*?)/([a-z]*)?$'
# 'price' , 'lowPrice', 'highPrice' are usually under here
# All of these may or may not appear on different websites - I didn't find a way to do case-insensitive searching here
LD_JSON_PRODUCT_OFFER_SELECTORS = ["json:$..offers", "json:$..Offers"]
class JSONNotFound(ValueError):
def __init__(self, msg):
ValueError.__init__(self, msg)
# Doesn't look like python supports forward slash auto enclosure in re.findall
# So convert it to inline flag "(?i)foobar" type configuration
def perl_style_slash_enclosed_regex_to_options(regex):
res = re.search(PERL_STYLE_REGEX, regex, re.IGNORECASE)
if res:
flags = res.group(2) if res.group(2) else 'i'
regex = f"(?{flags}){res.group(1)}"
else:
# Fall back to just ignorecase as an option
regex = f"(?i){regex}"
return regex
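# Illustrative examples showing how slash-enclosed patterns are rewritten into inline-flag form:
#   perl_style_slash_enclosed_regex_to_options('/out of stock/i')  ->  '(?i)out of stock'
#   perl_style_slash_enclosed_regex_to_options('/Special Offer/')  ->  '(?i)Special Offer'   (flags default to 'i')
#   perl_style_slash_enclosed_regex_to_options('out of stock')     ->  '(?i)out of stock'    (fallback, ignore-case only)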
# Given a CSS Rule, and a blob of HTML, return the blob of HTML that matches
def include_filters(include_filters, html_content, append_pretty_line_formatting=False):
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_content, "html.parser")
html_block = ""
r = soup.select(include_filters, separator="")
for element in r:
# When there's more than 1 match, then add the suffix to separate each line
# And where the matched result doesn't include something that will cause Inscriptis to add a newline
# (This way each 'match' reliably has a new-line in the diff)
# Divs are converted to 4 whitespaces by inscriptis
if append_pretty_line_formatting and len(html_block) and element.name not in ('br', 'hr', 'div', 'p'):
html_block += TEXT_FILTER_LIST_LINE_SUFFIX
html_block += str(element)
return html_block
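# Illustrative intent (example values only): given a page such as
# '<div><span class="price">5.99</span></div>', include_filters('.price', html) keeps only
# the matching elements, e.g. '<span class="price">5.99</span>', inserting
# TEXT_FILTER_LIST_LINE_SUFFIX between matches so each one lands on its own line in the diff.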
def subtractive_css_selector(css_selector, html_content):
from bs4 import BeautifulSoup
soup = BeautifulSoup(html_content, "html.parser")
# So that the elements don't shift their index, build a list of elements here which will be pointers to their place in the DOM
elements_to_remove = soup.select(css_selector)
# Then, remove them in a separate loop
for item in elements_to_remove:
item.decompose()
return str(soup)
def subtractive_xpath_selector(selectors: List[str], html_content: str) -> str:
# Parse the HTML content using lxml
html_tree = etree.HTML(html_content)
# First, collect all elements to remove
elements_to_remove = []
# Iterate over the list of XPath selectors
for selector in selectors:
# Collect elements for each selector
elements_to_remove.extend(html_tree.xpath(selector))
# Then, remove them in a separate loop
for element in elements_to_remove:
if element.getparent() is not None: # Ensure the element has a parent before removing
element.getparent().remove(element)
# Convert the modified HTML tree back to a string
modified_html = etree.tostring(html_tree, method="html").decode("utf-8")
return modified_html
def element_removal(selectors: List[str], html_content):
"""Removes elements that match a list of CSS or XPath selectors."""
modified_html = html_content
css_selectors = []
xpath_selectors = []
for selector in selectors:
if selector.startswith(('xpath:', 'xpath1:', '//')):
# Handle XPath selectors separately
xpath_selector = selector.removeprefix('xpath:').removeprefix('xpath1:')
xpath_selectors.append(xpath_selector)
else:
# Collect CSS selectors as one "hit", see comment in subtractive_css_selector
css_selectors.append(selector.strip().strip(","))
if xpath_selectors:
modified_html = subtractive_xpath_selector(xpath_selectors, modified_html)
if css_selectors:
# Remove duplicates, then combine all CSS selectors into one string, separated by commas
# This stops the elements index shifting
unique_selectors = list(set(css_selectors)) # Ensure uniqueness
combined_css_selector = " , ".join(unique_selectors)
modified_html = subtractive_css_selector(combined_css_selector, modified_html)
return modified_html
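# Illustrative sketch of how mixed selector lists are routed (example values only):
#   element_removal(['.advert', '#footer', 'xpath://div[@id="cookie-banner"]'], html_content)
# sends '//div[@id="cookie-banner"]' to subtractive_xpath_selector() and combines the CSS
# selectors into a single comma-separated query (e.g. '.advert , #footer') for
# subtractive_css_selector(), returning the remaining HTML as a string.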
def elementpath_tostring(obj):
"""
change elementpath.select results to string type
# The MIT License (MIT), Copyright (c), 2018-2021, SISSA (Scuola Internazionale Superiore di Studi Avanzati)
# https://github.com/sissaschool/elementpath/blob/dfcc2fd3d6011b16e02bf30459a7924f547b47d0/elementpath/xpath_tokens.py#L1038
"""
import elementpath
from decimal import Decimal
import math
if obj is None:
return ''
# https://elementpath.readthedocs.io/en/latest/xpath_api.html#elementpath.select
elif isinstance(obj, elementpath.XPathNode):
return obj.string_value
elif isinstance(obj, bool):
return 'true' if obj else 'false'
elif isinstance(obj, Decimal):
value = format(obj, 'f')
if '.' in value:
return value.rstrip('0').rstrip('.')
return value
elif isinstance(obj, float):
if math.isnan(obj):
return 'NaN'
elif math.isinf(obj):
return str(obj).upper()
value = str(obj)
if '.' in value:
value = value.rstrip('0').rstrip('.')
if '+' in value:
value = value.replace('+', '')
if 'e' in value:
return value.upper()
return value
return str(obj)
# Return UTF-8 str of matched rules
def xpath_filter(xpath_filter, html_content, append_pretty_line_formatting=False, is_rss=False):
from lxml import etree, html
import elementpath
# xpath 2.0-3.1
from elementpath.xpath3 import XPath3Parser
parser = etree.HTMLParser()
if is_rss:
# So that we can keep CDATA for cdata_in_document_to_text() to process
parser = etree.XMLParser(strip_cdata=False)
tree = html.fromstring(bytes(html_content, encoding='utf-8'), parser=parser)
html_block = ""
r = elementpath.select(tree, xpath_filter.strip(), namespaces={'re': 'http://exslt.org/regular-expressions'}, parser=XPath3Parser)
# @note: //title/text() won't work where <title> contains CDATA..
if type(r) != list:
r = [r]
for element in r:
# When there's more than 1 match, then add the suffix to separate each line
# And where the matched result doesn't include something that will cause Inscriptis to add a newline
# (This way each 'match' reliably has a new-line in the diff)
# Divs are converted to 4 whitespaces by inscriptis
if append_pretty_line_formatting and len(html_block) and (not hasattr(element, 'tag') or element.tag not in ('br', 'hr', 'div', 'p')):
html_block += TEXT_FILTER_LIST_LINE_SUFFIX
if type(element) == str:
html_block += element
elif issubclass(type(element), etree._Element) or issubclass(type(element), etree._ElementTree):
html_block += etree.tostring(element, pretty_print=True).decode('utf-8')
else:
html_block += elementpath_tostring(element)
return html_block
# Return UTF-8 str of matched rules
# 'xpath1:'
def xpath1_filter(xpath_filter, html_content, append_pretty_line_formatting=False, is_rss=False):
from lxml import etree, html
parser = None
if is_rss:
# So that we can keep CDATA for cdata_in_document_to_text() to process
parser = etree.XMLParser(strip_cdata=False)
tree = html.fromstring(bytes(html_content, encoding='utf-8'), parser=parser)
html_block = ""
r = tree.xpath(xpath_filter.strip(), namespaces={'re': 'http://exslt.org/regular-expressions'})
# @note: //title/text() won't work where <title> contains CDATA..
for element in r:
# When there's more than 1 match, then add the suffix to separate each line
# And where the matched result doesn't include something that will cause Inscriptis to add a newline
# (This way each 'match' reliably has a new-line in the diff)
# Divs are converted to 4 whitespaces by inscriptis
if append_pretty_line_formatting and len(html_block) and (not hasattr(element, 'tag') or element.tag not in ('br', 'hr', 'div', 'p')):
html_block += TEXT_FILTER_LIST_LINE_SUFFIX
# Some kind of text, UTF-8 or other
if isinstance(element, (str, bytes)):
html_block += element
else:
# Return the HTML which will get parsed as text
html_block += etree.tostring(element, pretty_print=True).decode('utf-8')
return html_block
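# Illustrative usage (example values only):
#   xpath_filter('//h1/text()', html_content)    # XPath 2.0/3.1 via elementpath, used for 'xpath:'
#   xpath1_filter('//h1/text()', html_content)   # plain lxml XPath 1.0, used for the 'xpath1:' prefix
# Both return the matched nodes/strings concatenated into a single UTF-8 string block.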
# Extract/find element
def extract_element(find='title', html_content=''):
from bs4 import BeautifulSoup
# Re #106, be sure to handle when it's not found
element_text = None
soup = BeautifulSoup(html_content, 'html.parser')
result = soup.find(find)
if result and result.string:
element_text = result.string.strip()
return element_text
#
def _parse_json(json_data, json_filter):
from jsonpath_ng.ext import parse
if json_filter.startswith("json:"):
jsonpath_expression = parse(json_filter.replace('json:', ''))
match = jsonpath_expression.find(json_data)
return _get_stripped_text_from_json_match(match)
if json_filter.startswith("jq:") or json_filter.startswith("jqraw:"):
try:
import jq
except ModuleNotFoundError:
# `jq` requires full compilation on Windows and so isn't generally available
raise Exception("jq support not found")
if json_filter.startswith("jq:"):
jq_expression = jq.compile(json_filter.removeprefix("jq:"))
match = jq_expression.input(json_data).all()
return _get_stripped_text_from_json_match(match)
if json_filter.startswith("jqraw:"):
jq_expression = jq.compile(json_filter.removeprefix("jqraw:"))
match = jq_expression.input(json_data).all()
return '\n'.join(str(item) for item in match)
def _get_stripped_text_from_json_match(match):
s = []
# More than one result, we will return it as a JSON list.
if len(match) > 1:
for i in match:
s.append(i.value if hasattr(i, 'value') else i)
# Single value, use just the value, as it could be later used in a token in notifications.
if len(match) == 1:
s = match[0].value if hasattr(match[0], 'value') else match[0]
# Re #257 - Better handling where it does not exist, in the case the original 's' value was False..
if not match:
# Re 265 - Just return an empty string when filter not found
return ''
# Ticket #462 - allow the original encoding through, usually it's UTF-8 or similar
stripped_text_from_html = json.dumps(s, indent=4, ensure_ascii=False)
return stripped_text_from_html
# content - json
# json_filter - ie json:$..price
# ensure_is_ldjson_info_type - str "product", optional, "@type == product" (I don't know how to express that as a JSON selector)
def extract_json_as_string(content, json_filter, ensure_is_ldjson_info_type=None):
from bs4 import BeautifulSoup
stripped_text_from_html = False
# https://github.com/dgtlmoon/changedetection.io/pull/2041#issuecomment-1848397161w
# Try to parse/filter out the JSON, if we get some parser error, then maybe it's embedded within HTML tags
try:
stripped_text_from_html = _parse_json(json.loads(content), json_filter)
except json.JSONDecodeError:
# Foreach <script json></script> blob.. just return the first that matches json_filter
# As a last resort, try to parse the whole <body>
soup = BeautifulSoup(content, 'html.parser')
if ensure_is_ldjson_info_type:
bs_result = soup.findAll('script', {"type": "application/ld+json"})
else:
bs_result = soup.findAll('script')
bs_result += soup.findAll('body')
bs_jsons = []
for result in bs_result:
# Skip empty tags, and things that don't even look like JSON
if not result.text or '{' not in result.text:
continue
try:
json_data = json.loads(result.text)
bs_jsons.append(json_data)
except json.JSONDecodeError:
# Skip objects which cannot be parsed
continue
if not bs_jsons:
raise JSONNotFound("No parsable JSON found in this document")
for json_data in bs_jsons:
stripped_text_from_html = _parse_json(json_data, json_filter)
if ensure_is_ldjson_info_type:
# Could sometimes be list, string or something else random
if isinstance(json_data, dict):
# If it has LD JSON 'key' @type, and @type is 'product', and something was found for the search
# (Some sites have multiple of the same ld+json @type='product', but some have the review part, some have the 'price' part)
# @type could also be a list although non-standard ("@type": ["Product", "SubType"],)
# LD_JSON auto-extract also requires some content PLUS the ldjson to be present
# 1833 - could be either str or dict, should not be anything else
t = json_data.get('@type')
if t and stripped_text_from_html:
if isinstance(t, str) and t.lower() == ensure_is_ldjson_info_type.lower():
break
# The non-standard part, some have a list
elif isinstance(t, list):
if ensure_is_ldjson_info_type.lower() in [x.lower().strip() for x in t]:
break
elif stripped_text_from_html:
break
if not stripped_text_from_html:
# Re 265 - Just return an empty string when filter not found
return ''
return stripped_text_from_html
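# Illustrative examples (values are made up):
#   extract_json_as_string('{"offers": {"price": "19.99"}}', 'json:$..price')
#     -> '"19.99"'    (a single match is returned as its bare JSON-encoded value)
#   extract_json_as_string(page_html, 'json:$..offers', ensure_is_ldjson_info_type='product')
#     -> the pretty-printed JSON matched inside the first <script type="application/ld+json">
#        blob whose @type is 'product', or '' when nothing matches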
# Mode - "content" return the content without the matches (default)
# - "line numbers" return a list of line numbers that match (int list)
#
# wordlist - list of regex's (str) or words (str)
# Preserves all linefeeds and other whitespace; it's not the job of this function to remove it
def strip_ignore_text(content, wordlist, mode="content"):
i = 0
output = []
ignore_text = []
ignore_regex = []
ignored_line_numbers = []
for k in wordlist:
# Is it a regex?
res = re.search(PERL_STYLE_REGEX, k, re.IGNORECASE)
if res:
ignore_regex.append(re.compile(perl_style_slash_enclosed_regex_to_options(k)))
else:
ignore_text.append(k.strip())
for line in content.splitlines(keepends=True):
i += 1
# Always ignore blank lines in this mode. (when this function gets called)
got_match = False
for l in ignore_text:
if l.lower() in line.lower():
got_match = True
if not got_match:
for r in ignore_regex:
if r.search(line):
got_match = True
if not got_match:
# Not ignored, and should preserve "keepends"
output.append(line)
else:
ignored_line_numbers.append(i)
# Used for finding out what to highlight
if mode == "line numbers":
return ignored_line_numbers
return ''.join(output)
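# Illustrative examples of the two modes (example values only):
#   content = "Price: 10\nLast updated: today\nPrice: 12\n"
#   strip_ignore_text(content, ['last updated'])                       -> "Price: 10\nPrice: 12\n"
#   strip_ignore_text(content, ['/price: \d+/'], mode="line numbers")  -> [1, 3]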
def cdata_in_document_to_text(html_content: str, render_anchor_tag_content=False) -> str:
from xml.sax.saxutils import escape as xml_escape
pattern = r'<!\[CDATA\[(\s*(?:.(?<!\]\]>)\s*)*)\]\]>'
def repl(m):
text = m.group(1)
return xml_escape(html_to_text(html_content=text)).strip()
return re.sub(pattern, repl, html_content)
def html_to_text(html_content: str, render_anchor_tag_content=False, is_rss=False) -> str:
from inscriptis import get_text
from inscriptis.model.config import ParserConfig
"""Converts html string to a string with just the text. If ignoring
rendering anchor tag content is enable, anchor tag content are also
included in the text
:param html_content: string with html content
:param render_anchor_tag_content: boolean flag indicating whether to extract
hyperlinks (the anchor tag content) together with text. This refers to the
'href' inside 'a' tags.
Anchor tag content is rendered in the following manner:
'[ text ](anchor tag content)'
:return: extracted text from the HTML
"""
# if anchor tag content flag is set to True define a config for
# extracting this content
if render_anchor_tag_content:
parser_config = ParserConfig(
annotation_rules={"a": ["hyperlink"]},
display_links=True
)
# otherwise set config to None/default
else:
parser_config = None
# RSS Mode - Inscriptis will treat `title` as something else.
# Make it as a regular block display element (//item/title)
# This is a bit of a hack - the real way it to use XSLT to convert it to HTML #1874
if is_rss:
html_content = re.sub(r'<title([\s>])', r'<h1\1', html_content)
html_content = re.sub(r'</title>', r'</h1>', html_content)
text_content = get_text(html_content, config=parser_config)
return text_content
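# Illustrative sketch of the anchor-rendering switch (exact whitespace depends on Inscriptis):
#   html = '<p>See the <a href="https://example.com/docs">docs</a></p>'
#   html_to_text(html)                                  -> roughly 'See the docs'
#   html_to_text(html, render_anchor_tag_content=True)  -> roughly 'See the [docs](https://example.com/docs)'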
# Does LD+JSON exist with a @type=='product' and a .price set anywhere?
def has_ldjson_product_info(content):
try:
lc = content.lower()
if 'application/ld+json' in lc and lc.count('"price"') == 1 and '"pricecurrency"' in lc:
return True
# On some pages this is really terribly expensive when they don't really need it
# (For example you never want price monitoring, but this runs on every watch to suggest it)
# for filter in LD_JSON_PRODUCT_OFFER_SELECTORS:
# pricing_data += extract_json_as_string(content=content,
# json_filter=filter,
# ensure_is_ldjson_info_type="product")
except Exception as e:
# OK too
return False
return False
def workarounds_for_obfuscations(content):
"""
Some sites are using sneaky tactics to make prices and other information un-renderable by Inscriptis
This could go into its own Pip package in the future, for faster updates
"""
# HomeDepot.com style <span>$<!-- -->90<!-- -->.<!-- -->74</span>
# https://github.com/weblyzard/inscriptis/issues/45
if not content:
return content
content = re.sub(r'<!--\s+-->', '', content)
return content
def get_triggered_text(content, trigger_text):
triggered_text = []
result = strip_ignore_text(content=content,
wordlist=trigger_text,
mode="line numbers")
i = 1
for p in content.splitlines():
if i in result:
triggered_text.append(p)
i += 1
return triggered_text
from bs4 import BeautifulSoup
import html
def escape_mixed_content(document):
import uuid
# Parse the document as HTML
# Generate a single random hash for placeholders
random_hash = f"__PLACEHOLDER_{uuid.uuid4().hex}__"
placeholder_map = []
# <br> to something else so we can preserve them
random_hash_br = f"__BR_{uuid.uuid4().hex}__"
document = document.replace('<br>', random_hash_br)
soup = BeautifulSoup(document, 'html.parser')
# Find all <span class="cdio"> and <br>/<br/>
for tag in soup.find_all("span", class_="cdio"):
placeholder_map.append(str(tag)) # Save the tag as a string
tag.replace_with(random_hash) # Replace tag with the placeholder
# Escape the entire document
escaped_html = html.escape(str(soup))
# Restore all occurrences of placeholders with the original tags
for original_tag in placeholder_map:
escaped_html = escaped_html.replace(random_hash, original_tag, 1) # Replace one occurrence at a time
escaped_html = escaped_html.replace( random_hash_br, "<br>")
return escaped_html
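# Illustrative example: everything is HTML-escaped except the change-highlight spans and
# <br> line breaks that the rendered diff relies on:
#   escape_mixed_content('<b>5</b> <span class="cdio">added</span><br>')
#     -> '&lt;b&gt;5&lt;/b&gt; <span class="cdio">added</span><br>'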

View File

@@ -0,0 +1,303 @@
from abc import ABC, abstractmethod
import time
import validators
from wtforms import ValidationError
from loguru import logger
from changedetectionio.forms import validate_url
class Importer():
remaining_data = []
new_uuids = []
good = 0
def __init__(self):
self.new_uuids = []
self.good = 0
self.remaining_data = []
self.import_profile = None
@abstractmethod
def run(self,
data,
flash,
datastore):
pass
class import_url_list(Importer):
"""
Imports a list, can be in <code>https://example.com tag1, tag2, last tag</code> format
"""
def run(self,
data,
flash,
datastore,
processor=None
):
urls = data.split("\n")
good = 0
now = time.time()
if (len(urls) > 5000):
flash("Importing 5,000 of the first URLs from your list, the rest can be imported again.")
for url in urls:
url = url.strip()
if not len(url):
continue
tags = ""
# 'tags' should be a csv list after the URL
if ' ' in url:
url, tags = url.split(" ", 1)
# Flask wtform validators won't work with basic auth, use the validators package
# Up to 5000 per batch so we don't flood the server
# @todo validators.url will fail when you add your own IP etc
if len(url) and 'http' in url.lower() and good < 5000:
extras = None
if processor:
extras = {'processor': processor}
new_uuid = datastore.add_watch(url=url.strip(), tag=tags, write_to_disk_now=False, extras=extras)
if new_uuid:
# Straight into the queue.
self.new_uuids.append(new_uuid)
good += 1
continue
# Worked past the 'continue' above, append it to the bad list
if self.remaining_data is None:
self.remaining_data = []
self.remaining_data.append(url)
flash("{} Imported from list in {:.2f}s, {} Skipped.".format(good, time.time() - now, len(self.remaining_data)))
class import_distill_io_json(Importer):
def run(self,
data,
flash,
datastore,
):
import json
good = 0
now = time.time()
self.new_uuids=[]
# @todo Use JSONSchema like in the API to validate here.
try:
data = json.loads(data.strip())
except json.decoder.JSONDecodeError:
flash("Unable to read JSON file, was it broken?", 'error')
return
if not data.get('data'):
flash("JSON structure looks invalid, was it broken?", 'error')
return
for d in data.get('data'):
d_config = json.loads(d['config'])
extras = {'title': d.get('name', None)}
if len(d['uri']) and good < 5000:
try:
# @todo we only support CSS ones at the moment
if d_config['selections'][0]['frames'][0]['excludes'][0]['type'] == 'css':
extras['subtractive_selectors'] = d_config['selections'][0]['frames'][0]['excludes'][0]['expr']
except KeyError:
pass
except IndexError:
pass
extras['include_filters'] = []
try:
if d_config['selections'][0]['frames'][0]['includes'][0]['type'] == 'xpath':
extras['include_filters'].append('xpath:' + d_config['selections'][0]['frames'][0]['includes'][0]['expr'])
else:
extras['include_filters'].append(d_config['selections'][0]['frames'][0]['includes'][0]['expr'])
except KeyError:
pass
except IndexError:
pass
new_uuid = datastore.add_watch(url=d['uri'].strip(),
tag=",".join(d.get('tags', [])),
extras=extras,
write_to_disk_now=False)
if new_uuid:
# Straight into the queue.
self.new_uuids.append(new_uuid)
good += 1
flash("{} Imported from Distill.io in {:.2f}s, {} Skipped.".format(len(self.new_uuids), time.time() - now, len(self.remaining_data)))
class import_xlsx_wachete(Importer):
def run(self,
data,
flash,
datastore,
):
good = 0
now = time.time()
self.new_uuids = []
from openpyxl import load_workbook
try:
wb = load_workbook(data)
except Exception as e:
# @todo correct except
flash("Unable to read export XLSX file, something wrong with the file?", 'error')
return
row_id = 2
for row in wb.active.iter_rows(min_row=row_id):
try:
extras = {}
data = {}
for cell in row:
if not cell.value:
continue
column_title = wb.active.cell(row=1, column=cell.column).value.strip().lower()
data[column_title] = cell.value
# Forced switch to webdriver/playwright/etc
dynamic_wachet = str(data.get('dynamic wachet', '')).strip().lower() # Convert bool to str to cover all cases
# libreoffice and others can have it as =FALSE() =TRUE(), or bool(true)
if 'true' in dynamic_wachet or dynamic_wachet == '1':
extras['fetch_backend'] = 'html_webdriver'
elif 'false' in dynamic_wachet or dynamic_wachet == '0':
extras['fetch_backend'] = 'html_requests'
if data.get('xpath'):
# @todo split by || ?
extras['include_filters'] = [data.get('xpath')]
if data.get('name'):
extras['title'] = data.get('name').strip()
if data.get('interval (min)'):
minutes = int(data.get('interval (min)'))
hours, minutes = divmod(minutes, 60)
days, hours = divmod(hours, 24)
weeks, days = divmod(days, 7)
extras['time_between_check'] = {'weeks': weeks, 'days': days, 'hours': hours, 'minutes': minutes, 'seconds': 0}
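# e.g. an 'interval (min)' of 10110 becomes {'weeks': 1, 'days': 0, 'hours': 0, 'minutes': 30, 'seconds': 0} (illustrative)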
# At minimum a URL is required.
if data.get('url'):
try:
validate_url(data.get('url'))
except ValidationError as e:
logger.error(f">> Import URL error {data.get('url')} {str(e)}")
flash(f"Error processing row number {row_id}, URL value was incorrect, row was skipped.", 'error')
# Don't bother processing anything else on this row
continue
new_uuid = datastore.add_watch(url=data['url'].strip(),
extras=extras,
tag=data.get('folder'),
write_to_disk_now=False)
if new_uuid:
# Straight into the queue.
self.new_uuids.append(new_uuid)
good += 1
except Exception as e:
logger.error(e)
flash(f"Error processing row number {row_id}, check all cell data types are correct, row was skipped.", 'error')
else:
row_id += 1
flash(
"{} imported from Wachete .xlsx in {:.2f}s".format(len(self.new_uuids), time.time() - now))
class import_xlsx_custom(Importer):
def run(self,
data,
flash,
datastore,
):
good = 0
now = time.time()
self.new_uuids = []
from openpyxl import load_workbook
try:
wb = load_workbook(data)
except Exception as e:
# @todo correct except
flash("Unable to read export XLSX file, something wrong with the file?", 'error')
return
# @todo check at least 2 rows, same in other method
from .forms import validate_url
row_i = 1
try:
for row in wb.active.iter_rows():
url = None
tags = None
extras = {}
for cell in row:
if not self.import_profile.get(cell.col_idx):
continue
if not cell.value:
continue
cell_map = self.import_profile.get(cell.col_idx)
cell_val = str(cell.value).strip() # could be bool
if cell_map == 'url':
url = cell.value.strip()
try:
validate_url(url)
except ValidationError as e:
logger.error(f">> Import URL error {url} {str(e)}")
flash(f"Error processing row number {row_i}, URL value was incorrect, row was skipped.", 'error')
# Don't bother processing anything else on this row
url = None
break
elif cell_map == 'tag':
tags = cell.value.strip()
elif cell_map == 'include_filters':
# @todo validate?
extras['include_filters'] = [cell.value.strip()]
elif cell_map == 'interval_minutes':
hours, minutes = divmod(int(cell_val), 60)
days, hours = divmod(hours, 24)
weeks, days = divmod(days, 7)
extras['time_between_check'] = {'weeks': weeks, 'days': days, 'hours': hours, 'minutes': minutes, 'seconds': 0}
else:
extras[cell_map] = cell_val
# At minimum a URL is required.
if url:
new_uuid = datastore.add_watch(url=url,
extras=extras,
tag=tags,
write_to_disk_now=False)
if new_uuid:
# Straight into the queue.
self.new_uuids.append(new_uuid)
good += 1
except Exception as e:
logger.error(e)
flash(f"Error processing row number {row_i}, check all cell data types are correct, row was skipped.", 'error')
else:
row_i += 1
flash(
"{} imported from custom .xlsx in {:.2f}s".format(len(self.new_uuids), time.time() - now))

View File

@@ -0,0 +1,75 @@
from os import getenv
from changedetectionio.notification import (
default_notification_body,
default_notification_format,
default_notification_title,
)
# Equal to or greater than this number of FilterNotFoundInResponse exceptions will trigger a filter-not-found notification
_FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT = 6
DEFAULT_SETTINGS_HEADERS_USERAGENT='Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36'
class model(dict):
base_config = {
'note': "Hello! If you change this file manually, please be sure to restart your changedetection.io instance!",
'watching': {},
'settings': {
'headers': {
},
'requests': {
'extra_proxies': [], # Configurable extra proxies via the UI
'extra_browsers': [], # Configurable extra browsers via the UI
'jitter_seconds': 0,
'proxy': None, # Preferred proxy connection
'time_between_check': {'weeks': None, 'days': None, 'hours': 3, 'minutes': None, 'seconds': None},
'timeout': int(getenv("DEFAULT_SETTINGS_REQUESTS_TIMEOUT", "45")), # Default 45 seconds
'workers': int(getenv("DEFAULT_SETTINGS_REQUESTS_WORKERS", "10")), # Number of threads, lower is better for slow connections
'default_ua': {
'html_requests': getenv("DEFAULT_SETTINGS_HEADERS_USERAGENT", DEFAULT_SETTINGS_HEADERS_USERAGENT),
'html_webdriver': None,
}
},
'application': {
# Custom notification content
'api_access_token_enabled': True,
'base_url' : None,
'empty_pages_are_a_change': False,
'extract_title_as_title': False,
'fetch_backend': getenv("DEFAULT_FETCH_BACKEND", "html_requests"),
'filter_failure_notification_threshold_attempts': _FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT,
'global_ignore_text': [], # List of text to ignore when calculating the comparison checksum
'global_subtractive_selectors': [],
'ignore_whitespace': True,
'notification_body': default_notification_body,
'notification_format': default_notification_format,
'notification_title': default_notification_title,
'notification_urls': [], # Apprise URL list
'pager_size': 50,
'password': False,
'render_anchor_tag_content': False,
'rss_access_token': None,
'rss_hide_muted_watches': True,
'schema_version' : 0,
'shared_diff_access': False,
'webdriver_delay': None , # Extra delay in seconds before extracting text
'tags': {}, #@todo use Tag.model initialisers
'timezone': None, # Default IANA timezone name
}
}
}
def __init__(self, *arg, **kw):
super(model, self).__init__(*arg, **kw)
self.update(self.base_config)
def parse_headers_from_text_file(filepath):
headers = {}
with open(filepath, 'r') as f:
for l in f.readlines():
l = l.strip()
if not l.startswith('#') and ':' in l:
(k, v) = l.split(':', 1) # Split only on the first colon
headers[k.strip()] = v.strip()
return headers
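# Illustrative example - a headers text file such as:
#   # comment lines and lines without ':' are skipped
#   User-Agent: my-custom-agent
#   X-Api-Key: abc123
# parsed with parse_headers_from_text_file('headers.txt') yields
# {'User-Agent': 'my-custom-agent', 'X-Api-Key': 'abc123'}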

View File

@@ -0,0 +1,14 @@
from changedetectionio.model import watch_base
class model(watch_base):
def __init__(self, *arg, **kw):
super(model, self).__init__(*arg, **kw)
self['overrides_watch'] = kw.get('default', {}).get('overrides_watch')
if kw.get('default'):
self.update(kw['default'])
del kw['default']

View File

@@ -0,0 +1,636 @@
from changedetectionio.strtobool import strtobool
from changedetectionio.safe_jinja import render as jinja_render
from . import watch_base
import os
import re
from pathlib import Path
from loguru import logger
from ..html_tools import TRANSLATE_WHITESPACE_TABLE
# Allowable protocols, protects against javascript: etc
# file:// is further checked by ALLOW_FILE_URI
SAFE_PROTOCOL_REGEX='^(http|https|ftp|file):'
minimum_seconds_recheck_time = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 3))
mtable = {'seconds': 1, 'minutes': 60, 'hours': 3600, 'days': 86400, 'weeks': 86400 * 7}
def is_safe_url(test_url):
# See https://github.com/dgtlmoon/changedetection.io/issues/1358
# Remove 'source:' prefix so we don't get 'source:javascript:' etc
# 'source:' is a valid way to tell us to return the source
r = re.compile(re.escape('source:'), re.IGNORECASE)
test_url = r.sub('', test_url)
pattern = re.compile(os.getenv('SAFE_PROTOCOL_REGEX', SAFE_PROTOCOL_REGEX), re.IGNORECASE)
if not pattern.match(test_url.strip()):
return False
return True
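# Illustrative examples (example values only):
#   is_safe_url('https://example.com')           -> True
#   is_safe_url('source:https://example.com')    -> True    ('source:' prefix is stripped before checking)
#   is_safe_url('javascript:alert(1)')           -> False
#   is_safe_url('source:javascript:alert(1)')    -> False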
class model(watch_base):
__newest_history_key = None
__history_n = 0
jitter_seconds = 0
def __init__(self, *arg, **kw):
self.__datastore_path = kw.get('datastore_path')
if kw.get('datastore_path'):
del kw['datastore_path']
super(model, self).__init__(*arg, **kw)
if kw.get('default'):
self.update(kw['default'])
del kw['default']
if self.get('default'):
del self['default']
# Be sure the cached timestamp is ready
bump = self.history
@property
def viewed(self):
# Don't return viewed when last_viewed is 0 and newest_key is 0
if int(self['last_viewed']) and int(self['last_viewed']) >= int(self.newest_history_key) :
return True
return False
def ensure_data_dir_exists(self):
if not os.path.isdir(self.watch_data_dir):
logger.debug(f"> Creating data dir {self.watch_data_dir}")
os.mkdir(self.watch_data_dir)
@property
def link(self):
url = self.get('url', '')
if not is_safe_url(url):
return 'DISABLED'
ready_url = url
if '{%' in url or '{{' in url:
# Jinja2 available in URLs along with https://pypi.org/project/jinja2-time/
try:
ready_url = jinja_render(template_str=url)
except Exception as e:
logger.critical(f"Invalid URL template for: '{url}' - {str(e)}")
from flask import (
flash, Markup, url_for
)
message = Markup('<a href="{}#general">The URL {} is invalid and cannot be used, click to edit</a>'.format(
url_for('edit_page', uuid=self.get('uuid')), self.get('url', '')))
flash(message, 'error')
return ''
if ready_url.startswith('source:'):
ready_url=ready_url.replace('source:', '')
# Also double check it after any Jinja2 formatting just incase
if not is_safe_url(ready_url):
return 'DISABLED'
return ready_url
def clear_watch(self):
import pathlib
# JSON Data, Screenshots, Textfiles (history index and snapshots), HTML in the future etc
for item in pathlib.Path(str(self.watch_data_dir)).rglob("*.*"):
os.unlink(item)
# Force the attr to recalculate
bump = self.history
# Do this last because it will trigger a recheck due to last_checked being zero
self.update({
'browser_steps_last_error_step': None,
'check_count': 0,
'fetch_time': 0.0,
'has_ldjson_price_data': None,
'last_checked': 0,
'last_error': False,
'last_notification_error': False,
'last_viewed': 0,
'previous_md5': False,
'previous_md5_before_filters': False,
'remote_server_reply': None,
'track_ldjson_price_data': None
})
return
@property
def is_source_type_url(self):
return self.get('url', '').startswith('source:')
@property
def get_fetch_backend(self):
"""
Like just using the `fetch_backend` key but there could be some logic
:return:
"""
# Maybe also if is_image etc?
# This is because chrome/playwright wont render the PDF in the browser and we will just fetch it and use pdf2html to see the text.
if self.is_pdf:
return 'html_requests'
return self.get('fetch_backend')
@property
def is_pdf(self):
# content_type field is set in the future
# https://github.com/dgtlmoon/changedetection.io/issues/1392
# Not sure the best logic here
return self.get('url', '').lower().endswith('.pdf') or 'pdf' in self.get('content_type', '').lower()
@property
def label(self):
# Used for sorting
return self.get('title') if self.get('title') else self.get('url')
@property
def last_changed(self):
# last_changed will be the newest snapshot, but when we have just one snapshot, it should be 0
if self.__history_n <= 1:
return 0
if self.__newest_history_key:
return int(self.__newest_history_key)
return 0
@property
def history_n(self):
return self.__history_n
@property
def history(self):
"""History index is just a text file as a list
{watch-uuid}/history.txt
contains a list like
{epoch-time},{filename}\n
We read in this list as the history information
"""
tmp_history = {}
# In the case we are only using the watch for processing without history
if not self.watch_data_dir:
return []
# Read the history file as a dict
fname = os.path.join(self.watch_data_dir, "history.txt")
if os.path.isfile(fname):
logger.debug(f"Reading watch history index for {self.get('uuid')}")
with open(fname, "r") as f:
for i in f.readlines():
if ',' in i:
k, v = i.strip().split(',', 2)
# The index history could contain a relative path, so we need to make the fullpath
# so that python can read it
if '/' not in v and '\\' not in v:
v = os.path.join(self.watch_data_dir, v)
else:
# It's possible that they moved the datadir on older versions
# So the snapshot exists but is in a different path
snapshot_fname = v.split('/')[-1]
proposed_new_path = os.path.join(self.watch_data_dir, snapshot_fname)
if not os.path.exists(v) and os.path.exists(proposed_new_path):
v = proposed_new_path
tmp_history[k] = v
if len(tmp_history):
self.__newest_history_key = list(tmp_history.keys())[-1]
else:
self.__newest_history_key = None
self.__history_n = len(tmp_history)
return tmp_history
@property
def has_history(self):
fname = os.path.join(self.watch_data_dir, "history.txt")
return os.path.isfile(fname)
@property
def has_browser_steps(self):
has_browser_steps = self.get('browser_steps') and list(filter(
lambda s: (s['operation'] and len(s['operation']) and s['operation'] != 'Choose one' and s['operation'] != 'Goto site'),
self.get('browser_steps')))
return has_browser_steps
@property
def has_restock_info(self):
if self.get('restock') and self['restock'].get('in_stock') is not None:
return True
return False
# Returns the newest key, but if there's only 1 record, then it's counted as not being new, so return 0.
@property
def newest_history_key(self):
if self.__newest_history_key is not None:
return self.__newest_history_key
if len(self.history) <= 1:
return 0
bump = self.history
return self.__newest_history_key
# Given an arbitrary timestamp, find the best history key for the [diff] button so it can preset a smarter from_version
@property
def get_from_version_based_on_last_viewed(self):
"""Unfortunately for now timestamp is stored as string key"""
keys = list(self.history.keys())
if not keys:
return None
if len(keys) == 1:
return keys[0]
last_viewed = int(self.get('last_viewed'))
sorted_keys = sorted(keys, key=lambda x: int(x))
sorted_keys.reverse()
# When the 'last viewed' timestamp is greater than or equal the newest snapshot, return second newest
if last_viewed >= int(sorted_keys[0]):
return sorted_keys[1]
# When the 'last viewed' timestamp is between snapshots, return the older snapshot
for newer, older in list(zip(sorted_keys[0:], sorted_keys[1:])):
if last_viewed < int(newer) and last_viewed >= int(older):
return older
# When the 'last viewed' timestamp is less than the oldest snapshot, return oldest
return sorted_keys[-1]
def get_history_snapshot(self, timestamp):
import brotli
filepath = self.history[timestamp]
# See if a brotli versions exists and switch to that
if not filepath.endswith('.br') and os.path.isfile(f"{filepath}.br"):
filepath = f"{filepath}.br"
# OR in the backup case that the .br does not exist, but the plain one does
if filepath.endswith('.br') and not os.path.isfile(filepath):
if os.path.isfile(filepath.replace('.br', '')):
filepath = filepath.replace('.br', '')
if filepath.endswith('.br'):
# Brotli doesn't have a file header to detect it, so we rely on the filename
# https://www.rfc-editor.org/rfc/rfc7932
with open(filepath, 'rb') as f:
return(brotli.decompress(f.read()).decode('utf-8'))
with open(filepath, 'r', encoding='utf-8', errors='ignore') as f:
return f.read()
# Save some text file to the appropriate path and bump the history
# result_obj from fetch_site_status.run()
def save_history_text(self, contents, timestamp, snapshot_id):
import brotli
logger.trace(f"{self.get('uuid')} - Updating history.txt with timestamp {timestamp}")
self.ensure_data_dir_exists()
threshold = int(os.getenv('SNAPSHOT_BROTLI_COMPRESSION_THRESHOLD', 1024))
skip_brotli = strtobool(os.getenv('DISABLE_BROTLI_TEXT_SNAPSHOT', 'False'))
if not skip_brotli and len(contents) > threshold:
snapshot_fname = f"{snapshot_id}.txt.br"
dest = os.path.join(self.watch_data_dir, snapshot_fname)
if not os.path.exists(dest):
with open(dest, 'wb') as f:
f.write(brotli.compress(contents.encode('utf-8'), mode=brotli.MODE_TEXT))
else:
snapshot_fname = f"{snapshot_id}.txt"
dest = os.path.join(self.watch_data_dir, snapshot_fname)
if not os.path.exists(dest):
with open(dest, 'wb') as f:
f.write(contents.encode('utf-8'))
# Append to index
# @todo check last char was \n
index_fname = os.path.join(self.watch_data_dir, "history.txt")
with open(index_fname, 'a') as f:
f.write("{},{}\n".format(timestamp, snapshot_fname))
f.close()
self.__newest_history_key = timestamp
self.__history_n += 1
# @todo bump static cache of the last timestamp so we don't need to examine the file to set a proper 'viewed' status
return snapshot_fname
@property
def has_empty_checktime(self):
# using all() + dictionary comprehension
# Check if all values are 0 in dictionary
res = all(x is None or x is False or x == 0 for x in self.get('time_between_check', {}).values())
return res
def threshold_seconds(self):
seconds = 0
for m, n in mtable.items():
x = self.get('time_between_check', {}).get(m, None)
if x:
seconds += x * n
return seconds
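# Worked example (illustrative): with time_between_check = {'hours': 3, 'minutes': 30}
# threshold_seconds() returns 3*3600 + 30*60 = 12600 seconds.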
# Iterate over all history texts and see if something new exists
# Always applying .strip() to start/end but optionally replace any other whitespace
def lines_contain_something_unique_compared_to_history(self, lines: list, ignore_whitespace=False):
local_lines = []
if lines:
if ignore_whitespace:
if isinstance(lines[0], str): # Can be either str or bytes depending on what was on the disk
local_lines = set([l.translate(TRANSLATE_WHITESPACE_TABLE).lower() for l in lines])
else:
local_lines = set([l.decode('utf-8').translate(TRANSLATE_WHITESPACE_TABLE).lower() for l in lines])
else:
if isinstance(lines[0], str): # Can be either str or bytes depending on what was on the disk
local_lines = set([l.strip().lower() for l in lines])
else:
local_lines = set([l.decode('utf-8').strip().lower() for l in lines])
# Compare each lines (set) against each history text file (set) looking for something new..
existing_history = set({})
for k, v in self.history.items():
content = self.get_history_snapshot(k)
if ignore_whitespace:
alist = set([line.translate(TRANSLATE_WHITESPACE_TABLE).lower() for line in content.splitlines()])
else:
alist = set([line.strip().lower() for line in content.splitlines()])
existing_history = existing_history.union(alist)
# Check that everything in local_lines(new stuff) already exists in existing_history - it should
# if not, something new happened
return not local_lines.issubset(existing_history)
def get_screenshot(self):
fname = os.path.join(self.watch_data_dir, "last-screenshot.png")
if os.path.isfile(fname):
return fname
# False is not an option for AppRise, must be type None
return None
def __get_file_ctime(self, filename):
fname = os.path.join(self.watch_data_dir, filename)
if os.path.isfile(fname):
return int(os.path.getmtime(fname))
return False
@property
def error_text_ctime(self):
return self.__get_file_ctime('last-error.txt')
@property
def snapshot_text_ctime(self):
if self.history_n==0:
return False
timestamp = list(self.history.keys())[-1]
return int(timestamp)
@property
def snapshot_screenshot_ctime(self):
return self.__get_file_ctime('last-screenshot.png')
@property
def snapshot_error_screenshot_ctime(self):
return self.__get_file_ctime('last-error-screenshot.png')
@property
def watch_data_dir(self):
# The base dir of the watch data
return os.path.join(self.__datastore_path, self['uuid']) if self.__datastore_path else None
def get_error_text(self):
"""Return the text saved from a previous request that resulted in a non-200 error"""
fname = os.path.join(self.watch_data_dir, "last-error.txt")
if os.path.isfile(fname):
with open(fname, 'r') as f:
return f.read()
return False
def get_error_snapshot(self):
"""Return path to the screenshot that resulted in a non-200 error"""
fname = os.path.join(self.watch_data_dir, "last-error-screenshot.png")
if os.path.isfile(fname):
return fname
return False
def pause(self):
self['paused'] = True
def unpause(self):
self['paused'] = False
def toggle_pause(self):
self['paused'] ^= True
def mute(self):
self['notification_muted'] = True
def unmute(self):
self['notification_muted'] = False
def toggle_mute(self):
self['notification_muted'] ^= True
def extra_notification_token_values(self):
# Used for providing extra tokens
# return {'widget': 555}
return {}
def extra_notification_token_placeholder_info(self):
# Used for providing extra tokens
# return [('widget', "Get widget amounts")]
return []
def extract_regex_from_all_history(self, regex):
import csv
import re
import datetime
csv_output_filename = False
csv_writer = False
f = None
# self.history will be keyed with the full path
for k, fname in self.history.items():
if os.path.isfile(fname):
if True:
contents = self.get_history_snapshot(k)
res = re.findall(regex, contents, re.MULTILINE)
if res:
if not csv_writer:
# A file on the disk can be transferred much faster via flask than a string reply
csv_output_filename = 'report.csv'
f = open(os.path.join(self.watch_data_dir, csv_output_filename), 'w')
# @todo some headers in the future
#fieldnames = ['Epoch seconds', 'Date']
csv_writer = csv.writer(f,
delimiter=',',
quotechar='"',
quoting=csv.QUOTE_MINIMAL,
#fieldnames=fieldnames
)
csv_writer.writerow(['Epoch seconds', 'Date'])
# csv_writer.writeheader()
date_str = datetime.datetime.fromtimestamp(int(k)).strftime('%Y-%m-%d %H:%M:%S')
for r in res:
row = [k, date_str]
if isinstance(r, str):
row.append(r)
else:
row+=r
csv_writer.writerow(row)
if f:
f.close()
return csv_output_filename
def has_special_diff_filter_options_set(self):
# All False - nothing would be done, so act like it's not processable
if not self.get('filter_text_added', True) and not self.get('filter_text_replaced', True) and not self.get('filter_text_removed', True):
return False
# At least one (but not all) of them has been disabled
if not self.get('filter_text_added', True) or not self.get('filter_text_replaced', True) or not self.get('filter_text_removed', True):
return True
# All of them are still enabled (the default), so nothing special is set
return False
def save_error_text(self, contents):
self.ensure_data_dir_exists()
target_path = os.path.join(self.watch_data_dir, "last-error.txt")
with open(target_path, 'w') as f:
f.write(contents)
def save_xpath_data(self, data, as_error=False):
import json
import zlib
if as_error:
target_path = os.path.join(str(self.watch_data_dir), "elements-error.deflate")
else:
target_path = os.path.join(str(self.watch_data_dir), "elements.deflate")
self.ensure_data_dir_exists()
with open(target_path, 'wb') as f:
f.write(zlib.compress(json.dumps(data).encode()))
f.close()
# Save as PNG, PNG is larger but better for doing visual diff in the future
def save_screenshot(self, screenshot: bytes, as_error=False):
if as_error:
target_path = os.path.join(self.watch_data_dir, "last-error-screenshot.png")
else:
target_path = os.path.join(self.watch_data_dir, "last-screenshot.png")
self.ensure_data_dir_exists()
with open(target_path, 'wb') as f:
f.write(screenshot)
f.close()
def get_last_fetched_text_before_filters(self):
import brotli
filepath = os.path.join(self.watch_data_dir, 'last-fetched.br')
if not os.path.isfile(filepath):
# If a previous attempt doesn't yet exist, just snarf the previous snapshot instead
dates = list(self.history.keys())
if len(dates):
return self.get_history_snapshot(dates[-1])
else:
return ''
with open(filepath, 'rb') as f:
return(brotli.decompress(f.read()).decode('utf-8'))
def save_last_text_fetched_before_filters(self, contents):
import brotli
filepath = os.path.join(self.watch_data_dir, 'last-fetched.br')
with open(filepath, 'wb') as f:
f.write(brotli.compress(contents, mode=brotli.MODE_TEXT))
def save_last_fetched_html(self, timestamp, contents):
import brotli
self.ensure_data_dir_exists()
snapshot_fname = f"{timestamp}.html.br"
filepath = os.path.join(self.watch_data_dir, snapshot_fname)
with open(filepath, 'wb') as f:
contents = contents.encode('utf-8') if isinstance(contents, str) else contents
try:
f.write(brotli.compress(contents))
except Exception as e:
logger.warning(f"{self.get('uuid')} - Unable to compress snapshot, saving as raw data to {filepath}")
logger.warning(e)
f.write(contents)
self._prune_last_fetched_html_snapshots()
def get_fetched_html(self, timestamp):
import brotli
snapshot_fname = f"{timestamp}.html.br"
filepath = os.path.join(self.watch_data_dir, snapshot_fname)
if os.path.isfile(filepath):
with open(filepath, 'rb') as f:
return (brotli.decompress(f.read()).decode('utf-8'))
return False
def _prune_last_fetched_html_snapshots(self):
dates = list(self.history.keys())
dates.reverse()
for index, timestamp in enumerate(dates):
snapshot_fname = f"{timestamp}.html.br"
filepath = os.path.join(self.watch_data_dir, snapshot_fname)
# Keep only the first 2
if index > 1 and os.path.isfile(filepath):
os.remove(filepath)
@property
def get_browsersteps_available_screenshots(self):
"For knowing which screenshots are available to show the user in BrowserSteps UI"
available = []
for f in Path(self.watch_data_dir).glob('step_before-*.jpeg'):
step_n=re.search(r'step_before-(\d+)', f.name)
if step_n:
available.append(step_n.group(1))
return available

View File

@@ -0,0 +1,135 @@
import os
import uuid
from changedetectionio import strtobool
from changedetectionio.notification import default_notification_format_for_watch
class watch_base(dict):
def __init__(self, *arg, **kw):
self.update({
# Custom notification content
# Re #110, so then if this is set to None, we know to use the default value instead
# Requires setting to None on submit if it's the same as the default
# Should be all None by default, so we use the system default in this case.
'body': None,
'browser_steps': [],
'browser_steps_last_error_step': None,
'check_count': 0,
'check_unique_lines': False, # On change-detected, compare against all history if it's something new
'consecutive_filter_failures': 0, # Every time the CSS/xPath filter cannot be located, reset when all is fine.
'content-type': None,
'date_created': None,
'extract_text': [], # Extract text by regex after filters
'extract_title_as_title': False,
'fetch_backend': 'system', # plaintext, playwright etc
'fetch_time': 0.0,
'filter_failure_notification_send': strtobool(os.getenv('FILTER_FAILURE_NOTIFICATION_SEND_DEFAULT', 'True')),
'filter_text_added': True,
'filter_text_removed': True,
'filter_text_replaced': True,
'follow_price_changes': True,
'has_ldjson_price_data': None,
'headers': {}, # Extra headers to send
'ignore_text': [], # List of text to ignore when calculating the comparison checksum
'in_stock_only': True, # Only trigger change on going to instock from out-of-stock
'include_filters': [],
'last_checked': 0,
'last_error': False,
'last_viewed': 0, # history key value of the last viewed via the [diff] link
'method': 'GET',
'notification_alert_count': 0,
'notification_body': None,
'notification_format': default_notification_format_for_watch,
'notification_muted': False,
'notification_screenshot': False, # Include the latest screenshot if available and supported by the apprise URL
'notification_title': None,
'notification_urls': [], # List of URLs to add to the notification Queue (Usually AppRise)
'paused': False,
'previous_md5': False,
'previous_md5_before_filters': False, # Used for skipping changedetection entirely
'processor': 'text_json_diff', # could be restock_diff or others from .processors
'price_change_threshold_percent': None,
'proxy': None, # Preferred proxy connection
'remote_server_reply': None, # From 'server' reply header
'sort_text_alphabetically': False,
'subtractive_selectors': [],
'tag': '', # Old system of text name for a tag, to be removed
'tags': [], # list of UUIDs to App.Tags
'text_should_not_be_present': [], # Text that should not be present
'time_between_check': {'weeks': None, 'days': None, 'hours': None, 'minutes': None, 'seconds': None},
'time_between_check_use_default': True,
"time_schedule_limit": {
"enabled": False,
"monday": {
"enabled": True,
"start_time": "00:00",
"duration": {
"hours": "24",
"minutes": "00"
}
},
"tuesday": {
"enabled": True,
"start_time": "00:00",
"duration": {
"hours": "24",
"minutes": "00"
}
},
"wednesday": {
"enabled": True,
"start_time": "00:00",
"duration": {
"hours": "24",
"minutes": "00"
}
},
"thursday": {
"enabled": True,
"start_time": "00:00",
"duration": {
"hours": "24",
"minutes": "00"
}
},
"friday": {
"enabled": True,
"start_time": "00:00",
"duration": {
"hours": "24",
"minutes": "00"
}
},
"saturday": {
"enabled": True,
"start_time": "00:00",
"duration": {
"hours": "24",
"minutes": "00"
}
},
"sunday": {
"enabled": True,
"start_time": "00:00",
"duration": {
"hours": "24",
"minutes": "00"
}
},
},
'title': None,
'track_ldjson_price_data': None,
'trim_text_whitespace': False,
'remove_duplicate_lines': False,
'trigger_text': [], # List of text or regex to wait for until a change is detected
'url': '',
'uuid': str(uuid.uuid4()),
'webdriver_delay': None,
'webdriver_js_execute_code': None, # Run before change-detection
})
super(watch_base, self).__init__(*arg, **kw)
if self.get('default'):
del self['default']
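# Illustrative only: watch_base is a dict subclass pre-populated with the defaults above,
# so keyword arguments supplied at construction simply override them (the URL below is a placeholder).
#
#   w = watch_base(url='https://example.com', tags=[])
#   w['url']            -> 'https://example.com'
#   w['fetch_backend']  -> 'system' (still the default)
#   w['uuid']           -> a freshly generated uuid4 string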

View File

@@ -0,0 +1,228 @@
import time
from apprise import NotifyFormat
import apprise
from loguru import logger
from changedetectionio.html_tools import escape_mixed_content
valid_tokens = {
'base_url': '',
'current_snapshot': '',
'diff': '',
'diff_added': '',
'diff_full': '',
'diff_patch': '',
'diff_removed': '',
'diff_url': '',
'preview_url': '',
'triggered_text': '',
'watch_tag': '',
'watch_title': '',
'watch_url': '',
'watch_uuid': '',
}
default_notification_format_for_watch = 'System default'
default_notification_format = 'HTML Color'
default_notification_body = '{{watch_url}} had a change.\n---\n{{diff}}\n---\n'
default_notification_title = 'ChangeDetection.io Notification - {{watch_url}}'
valid_notification_formats = {
'Text': NotifyFormat.TEXT,
'Markdown': NotifyFormat.MARKDOWN,
'HTML': NotifyFormat.HTML,
'HTML Color': 'htmlcolor',
# Used only for editing a watch (not for global)
default_notification_format_for_watch: default_notification_format_for_watch
}
def process_notification(n_object, datastore):
# so that the custom endpoints are registered
from changedetectionio.apprise_plugin import apprise_custom_api_call_wrapper
from .safe_jinja import render as jinja_render
now = time.time()
if n_object.get('notification_timestamp'):
logger.trace(f"Time since queued {now-n_object['notification_timestamp']:.3f}s")
# Insert variables into the notification content
notification_parameters = create_notification_parameters(n_object, datastore)
n_format = valid_notification_formats.get(
n_object.get('notification_format', default_notification_format),
valid_notification_formats[default_notification_format],
)
# If we arrived with 'System default' then look it up
if n_format == default_notification_format_for_watch and datastore.data['settings']['application'].get('notification_format') != default_notification_format_for_watch:
# Initially text or whatever
n_format = datastore.data['settings']['application'].get('notification_format', valid_notification_formats[default_notification_format])
logger.trace(f"Complete notification body including Jinja and placeholders calculated in {time.time() - now:.3f}s")
# https://github.com/caronc/apprise/wiki/Development_LogCapture
# Anything higher than or equal to WARNING (which covers things like Connection errors)
# raise it as an exception
sent_objs = []
from .apprise_asset import asset
if 'as_async' in n_object:
asset.async_mode = n_object.get('as_async')
apobj = apprise.Apprise(debug=True, asset=asset)
if not n_object.get('notification_urls'):
return None
with apprise.LogCapture(level=apprise.logging.DEBUG) as logs:
for url in n_object['notification_urls']:
# Get the notification body from datastore
n_body = jinja_render(template_str=n_object.get('notification_body', ''), **notification_parameters)
if n_object.get('notification_format', '').startswith('HTML'):
n_body = n_body.replace("\n", '<br>')
n_title = jinja_render(template_str=n_object.get('notification_title', ''), **notification_parameters)
if n_object.get('notification_format', '').startswith('HTML'):
n_body = escape_mixed_content(n_body)
url = url.strip()
if url.startswith('#'):
logger.trace(f"Skipping commented out notification URL - {url}")
continue
if not url:
logger.warning(f"Process Notification: skipping empty notification URL.")
continue
logger.info(f">> Process Notification: AppRise notifying {url}")
url = jinja_render(template_str=url, **notification_parameters)
# Re 323 - Limit discord length to their 2000 char limit total or it won't send.
# Because different notifications may require different pre-processing, run each sequentially :(
# 2000 bytes minus -
# 200 bytes for the overhead of the _entire_ json payload, 200 bytes for {tts, wait, content} etc headers
# Length of URL - In case they specify a longer custom avatar_url
# So if no avatar_url is specified, add one so it can be correctly calculated into the total payload
k = '?' if not '?' in url else '&'
if not 'avatar_url' in url \
and not url.startswith('mail') \
and not url.startswith('post') \
and not url.startswith('get') \
and not url.startswith('delete') \
and not url.startswith('put'):
url += k + 'avatar_url=https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/changedetectionio/static/images/avatar-256x256.png'
if url.startswith('tgram://'):
# Telegram only supports a limited subset of HTML, remove the '<br>' we place in.
# re https://github.com/dgtlmoon/changedetection.io/issues/555
# @todo re-use an existing library we have already imported to strip all non-allowed tags
n_body = n_body.replace('<br>', '\n')
n_body = n_body.replace('</br>', '\n')
# real limit is 4096, but minus some for extra metadata
payload_max_size = 3600
body_limit = max(0, payload_max_size - len(n_title))
n_title = n_title[0:payload_max_size]
n_body = n_body[0:body_limit]
elif url.startswith('discord://') or url.startswith('https://discordapp.com/api/webhooks') or url.startswith(
'https://discord.com/api'):
# real limit is 2000, but minus some for extra metadata
payload_max_size = 1700
body_limit = max(0, payload_max_size - len(n_title))
n_title = n_title[0:payload_max_size]
n_body = n_body[0:body_limit]
elif url.startswith('mailto'):
# Apprise will default to HTML, so we need to override it
# So that what's generated in n_body is in line with what is going to be sent.
# https://github.com/caronc/apprise/issues/633#issuecomment-1191449321
if not 'format=' in url and (n_format == 'Text' or n_format == 'Markdown'):
prefix = '?' if not '?' in url else '&'
# Apprise format is lowercase text https://github.com/caronc/apprise/issues/633
n_format = n_format.lower()
url = f"{url}{prefix}format={n_format}"
# If n_format == HTML, then apprise email should default to text/html and we should be sending HTML only
apobj.add(url)
sent_objs.append({'title': n_title,
'body': n_body,
'url': url,
'body_format': n_format})
# Blast off the notifications that are set in .add()
apobj.notify(
title=n_title,
body=n_body,
body_format=n_format,
# False is not an option for AppRise, must be type None
attach=n_object.get('screenshot', None)
)
# Returns empty string if nothing found, multi-line string otherwise
log_value = logs.getvalue()
if log_value and ('WARNING' in log_value or 'ERROR' in log_value):
logger.critical(log_value)
raise Exception(log_value)
# Return what was sent for better logging - after the for loop
return sent_objs
# Notification title + body content parameters get created here.
# ( Where we prepare the tokens in the notification to be replaced with actual values )
def create_notification_parameters(n_object, datastore):
from copy import deepcopy
# in the case we send a test notification from the main settings, there is no UUID.
uuid = n_object['uuid'] if 'uuid' in n_object else ''
if uuid:
watch_title = datastore.data['watching'][uuid].get('title', '')
tag_list = []
tags = datastore.get_all_tags_for_watch(uuid)
if tags:
for tag_uuid, tag in tags.items():
tag_list.append(tag.get('title'))
watch_tag = ', '.join(tag_list)
else:
watch_title = 'Change Detection'
watch_tag = ''
# Create URLs to customise the notification with
# active_base_url - set in store.py data property
base_url = datastore.data['settings']['application'].get('active_base_url')
watch_url = n_object['watch_url']
diff_url = "{}/diff/{}".format(base_url, uuid)
preview_url = "{}/preview/{}".format(base_url, uuid)
# Not sure deepcopy is needed here, but why not
tokens = deepcopy(valid_tokens)
# Valid_tokens also used as a field validator
tokens.update(
{
'base_url': base_url,
'diff_url': diff_url,
'preview_url': preview_url,
'watch_tag': watch_tag if watch_tag is not None else '',
'watch_title': watch_title if watch_title is not None else '',
'watch_url': watch_url,
'watch_uuid': uuid,
})
# n_object will contain diff, diff_added etc etc
tokens.update(n_object)
if uuid:
tokens.update(datastore.data['watching'].get(uuid).extra_notification_token_values())
return tokens
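# Illustrative only: a minimal n_object as consumed by process_notification() above.
# Field names are taken from the code in this module; the values are placeholders.
#
#   n_object = {
#       'watch_url': 'https://example.com/product',
#       'notification_urls': ['tgram://bot-token/chat-id'],
#       'notification_title': default_notification_title,
#       'notification_body': default_notification_body,
#       'notification_format': default_notification_format,
#       'uuid': 'some-watch-uuid',
#   }
#
# process_notification(n_object, datastore) then renders the Jinja2 tokens via
# create_notification_parameters() and dispatches each notification URL through Apprise.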

View File

@@ -0,0 +1,15 @@
# Change detection post-processors
The concept here is to be able to switch between processors for different domain-specific problems (a minimal sketch of a custom processor follows at the end of this README).
- `text_json_diff` The traditional text and JSON comparison handler
- `restock_diff` Only cares about detecting whether a product page contains text suggesting it's out of stock; otherwise it assumes the product is in stock.
Some suggestions for the future
- `graphical`
## Todo
- Make each processor return an extra list of sub-processors (so you could configure a single processor in different ways)
- move restock_diff to its own pip/github repo
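
## Example

A minimal, illustrative sketch of what a new processor module could look like (the module name `my_processor`, the `name`/`description` strings and the MD5 comparison are placeholders, not part of the codebase); it subclasses `difference_detection_processor` from `changedetectionio.processors`:

```python
# changedetectionio/processors/my_processor/processor.py  (hypothetical)
import hashlib
from changedetectionio.processors import difference_detection_processor

name = 'My example processor'
description = 'Illustrative placeholder processor'

class perform_site_check(difference_detection_processor):
    def run_changedetection(self, watch):
        update_obj = {'last_notification_error': False, 'last_error': False}
        # self.fetcher.content is populated by call_browser() before this runs
        snapshot = self.fetcher.content or ''
        fetched_md5 = hashlib.md5(snapshot.encode('utf-8')).hexdigest()
        changed_detected = watch.get('previous_md5') != fetched_md5
        update_obj['previous_md5'] = fetched_md5
        return changed_detected, update_obj, snapshot.encode('utf-8')
```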

View File

@@ -0,0 +1,254 @@
from abc import abstractmethod
from changedetectionio.content_fetchers.base import Fetcher
from changedetectionio.strtobool import strtobool
from copy import deepcopy
from loguru import logger
import hashlib
import importlib
import inspect
import os
import pkgutil
import re
class difference_detection_processor():
browser_steps = None
datastore = None
fetcher = None
screenshot = None
watch = None
xpath_data = None
preferred_proxy = None
def __init__(self, *args, datastore, watch_uuid, **kwargs):
super().__init__(*args, **kwargs)
self.datastore = datastore
self.watch = deepcopy(self.datastore.data['watching'].get(watch_uuid))
# Generic fetcher that should be extended (requests, playwright etc)
self.fetcher = Fetcher()
def call_browser(self, preferred_proxy_id=None):
from requests.structures import CaseInsensitiveDict
url = self.watch.link
# Protect against file:, file:/, file:// access, check the real "link" without any meta "source:" etc prepended.
if re.search(r'^file:', url.strip(), re.IGNORECASE):
if not strtobool(os.getenv('ALLOW_FILE_URI', 'false')):
raise Exception(
"file:// type access is denied for security reasons."
)
# Requests, playwright, other browser via wss:// etc, fetch_extra_something
prefer_fetch_backend = self.watch.get('fetch_backend', 'system')
# Proxy ID "key"
preferred_proxy_id = preferred_proxy_id if preferred_proxy_id else self.datastore.get_preferred_proxy_for_watch(uuid=self.watch.get('uuid'))
# Pluggable content self.fetcher
if not prefer_fetch_backend or prefer_fetch_backend == 'system':
prefer_fetch_backend = self.datastore.data['settings']['application'].get('fetch_backend')
# In the case that the preferred fetcher was a browser config with custom connection URL..
# @todo - on save watch, if it's extra_browser_ then it should be obvious it will use playwright (like it is when it's requests now..)
custom_browser_connection_url = None
if prefer_fetch_backend.startswith('extra_browser_'):
(t, key) = prefer_fetch_backend.split('extra_browser_')
connection = list(
filter(lambda s: (s['browser_name'] == key), self.datastore.data['settings']['requests'].get('extra_browsers', [])))
if connection:
prefer_fetch_backend = 'html_webdriver'
custom_browser_connection_url = connection[0].get('browser_connection_url')
# PDF should be html_requests because playwright will serve it up (so far) in an embedded page
# @todo https://github.com/dgtlmoon/changedetection.io/issues/2019
# @todo needs a test or a fix
if self.watch.is_pdf:
prefer_fetch_backend = "html_requests"
# Grab the right kind of 'fetcher', (playwright, requests, etc)
from changedetectionio import content_fetchers
if hasattr(content_fetchers, prefer_fetch_backend):
# @todo TEMPORARY HACK - SWITCH BACK TO PLAYWRIGHT FOR BROWSERSTEPS
if prefer_fetch_backend == 'html_webdriver' and self.watch.has_browser_steps:
# This is never supported in selenium anyway
logger.warning("Using playwright fetcher override for possible puppeteer request in browsersteps, because puppetteer:browser steps is incomplete.")
from changedetectionio.content_fetchers.playwright import fetcher as playwright_fetcher
fetcher_obj = playwright_fetcher
else:
fetcher_obj = getattr(content_fetchers, prefer_fetch_backend)
else:
# What it referenced doesn't exist, just use a default
fetcher_obj = getattr(content_fetchers, "html_requests")
proxy_url = None
if preferred_proxy_id:
# Custom browser endpoints should NOT have a proxy added
if not prefer_fetch_backend.startswith('extra_browser_'):
proxy_url = self.datastore.proxy_list.get(preferred_proxy_id).get('url')
logger.debug(f"Selected proxy key '{preferred_proxy_id}' as proxy URL '{proxy_url}' for {url}")
else:
logger.debug(f"Skipping adding proxy data when custom Browser endpoint is specified. ")
# Now call the fetcher (playwright/requests/etc) with arguments that only a fetcher would need.
# When browser_connection_url is None, the method should default to working out the best defaults (os env vars etc)
self.fetcher = fetcher_obj(proxy_override=proxy_url,
custom_browser_connection_url=custom_browser_connection_url
)
if self.watch.has_browser_steps:
self.fetcher.browser_steps = self.watch.get('browser_steps', [])
self.fetcher.browser_steps_screenshot_path = os.path.join(self.datastore.datastore_path, self.watch.get('uuid'))
# Tweak the base config with the per-watch ones
from changedetectionio.safe_jinja import render as jinja_render
request_headers = CaseInsensitiveDict()
ua = self.datastore.data['settings']['requests'].get('default_ua')
if ua and ua.get(prefer_fetch_backend):
request_headers.update({'User-Agent': ua.get(prefer_fetch_backend)})
request_headers.update(self.watch.get('headers', {}))
request_headers.update(self.datastore.get_all_base_headers())
request_headers.update(self.datastore.get_all_headers_in_textfile_for_watch(uuid=self.watch.get('uuid')))
# https://github.com/psf/requests/issues/4525
# Requests doesn't yet support brotli encoding, so don't put 'br' here; be totally sure that the user cannot
# do this by accident.
if 'Accept-Encoding' in request_headers and "br" in request_headers['Accept-Encoding']:
request_headers['Accept-Encoding'] = request_headers['Accept-Encoding'].replace(', br', '')
for header_name in request_headers:
request_headers.update({header_name: jinja_render(template_str=request_headers.get(header_name))})
timeout = self.datastore.data['settings']['requests'].get('timeout')
request_body = self.watch.get('body')
if request_body:
request_body = jinja_render(template_str=self.watch.get('body'))
request_method = self.watch.get('method')
ignore_status_codes = self.watch.get('ignore_status_codes', False)
# Configurable per-watch or global extra delay before extracting text (for webDriver types)
system_webdriver_delay = self.datastore.data['settings']['application'].get('webdriver_delay', None)
if self.watch.get('webdriver_delay'):
self.fetcher.render_extract_delay = self.watch.get('webdriver_delay')
elif system_webdriver_delay is not None:
self.fetcher.render_extract_delay = system_webdriver_delay
if self.watch.get('webdriver_js_execute_code') is not None and self.watch.get('webdriver_js_execute_code').strip():
self.fetcher.webdriver_js_execute_code = self.watch.get('webdriver_js_execute_code')
# Requests for PDFs, images etc should be passed the is_binary flag
is_binary = self.watch.is_pdf
# And here we go! call the right browser with browser-specific settings
empty_pages_are_a_change = self.datastore.data['settings']['application'].get('empty_pages_are_a_change', False)
self.fetcher.run(url=url,
timeout=timeout,
request_headers=request_headers,
request_body=request_body,
request_method=request_method,
ignore_status_codes=ignore_status_codes,
current_include_filters=self.watch.get('include_filters'),
is_binary=is_binary,
empty_pages_are_a_change=empty_pages_are_a_change
)
#@todo .quit here could go on close object, so we can run JS if change-detected
self.fetcher.quit()
# After init, call run_changedetection() which will do the actual change-detection
@abstractmethod
def run_changedetection(self, watch):
update_obj = {'last_notification_error': False, 'last_error': False}
some_data = 'xxxxx'
update_obj["previous_md5"] = hashlib.md5(some_data.encode('utf-8')).hexdigest()
changed_detected = False
return changed_detected, update_obj, ''.encode('utf-8')
def find_sub_packages(package_name):
"""
Find all sub-packages within the given package.
:param package_name: The name of the base package to scan for sub-packages.
:return: A list of sub-package names.
"""
package = importlib.import_module(package_name)
return [name for _, name, is_pkg in pkgutil.iter_modules(package.__path__) if is_pkg]
def find_processors():
"""
Find all subclasses of difference_detection_processor within this package's sub-packages.
:return: A list of (module, sub-package name) tuples.
"""
package_name = "changedetectionio.processors" # Name of the current package/module
processors = []
sub_packages = find_sub_packages(package_name)
for sub_package in sub_packages:
module_name = f"{package_name}.{sub_package}.processor"
try:
module = importlib.import_module(module_name)
# Iterate through all classes in the module
for name, obj in inspect.getmembers(module, inspect.isclass):
if issubclass(obj, difference_detection_processor) and obj is not difference_detection_processor:
processors.append((module, sub_package))
except (ModuleNotFoundError, ImportError) as e:
logger.warning(f"Failed to import module {module_name}: {e} (find_processors())")
return processors
def get_parent_module(module):
module_name = module.__name__
if '.' not in module_name:
return None # Top-level module has no parent
parent_module_name = module_name.rsplit('.', 1)[0]
try:
return importlib.import_module(parent_module_name)
except Exception as e:
pass
return False
def get_custom_watch_obj_for_processor(processor_name):
from changedetectionio.model import Watch
watch_class = Watch.model
processor_classes = find_processors()
custom_watch_obj = next((tpl for tpl in processor_classes if tpl[1] == processor_name), None)
if custom_watch_obj:
# Parent of .processor.py COULD have its own Watch implementation
parent_module = get_parent_module(custom_watch_obj[0])
if hasattr(parent_module, 'Watch'):
watch_class = parent_module.Watch
return watch_class
def available_processors():
"""
Get a list of processors by name and description for the UI elements
:return: A list :)
"""
processor_classes = find_processors()
available = []
for package, processor_class in processor_classes:
available.append((processor_class, package.name))
return available
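# Illustrative only: with the two processors included in this change, the helpers above
# would return roughly the following (descriptions come from each processor module's `name`):
#
#   find_processors()      -> [(<module '...restock_diff.processor'>, 'restock_diff'),
#                              (<module '...text_json_diff.processor'>, 'text_json_diff')]
#   available_processors() -> [('restock_diff', 'Re-stock & Price detection for single product pages'),
#                              ('text_json_diff', 'Webpage Text/HTML, JSON and PDF changes')]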

View File

@@ -0,0 +1,10 @@
class ProcessorException(Exception):
def __init__(self, message=None, status_code=None, url=None, screenshot=None, has_filters=False, html_content='', xpath_data=None):
self.message = message
self.status_code = status_code
self.url = url
self.screenshot = screenshot
self.has_filters = has_filters
self.html_content = html_content
self.xpath_data = xpath_data
return

View File

@@ -0,0 +1,84 @@
from babel.numbers import parse_decimal
from changedetectionio.model.Watch import model as BaseWatch
from typing import Union
import re
class Restock(dict):
def parse_currency(self, raw_value: str) -> Union[float, None]:
# Clean and standardize the value (i.e. 1,400.00 should become 1400.00); even better would be to store the whole thing as an integer.
standardized_value = raw_value
if ',' in standardized_value and '.' in standardized_value:
# Identify the correct decimal separator
if standardized_value.rfind('.') > standardized_value.rfind(','):
standardized_value = standardized_value.replace(',', '')
else:
standardized_value = standardized_value.replace('.', '').replace(',', '.')
else:
standardized_value = standardized_value.replace(',', '.')
# Remove any non-numeric characters except for the decimal point
standardized_value = re.sub(r'[^\d.-]', '', standardized_value)
if standardized_value:
# Convert to float
return float(parse_decimal(standardized_value, locale='en'))
return None
def __init__(self, *args, **kwargs):
# Define default values
default_values = {
'in_stock': None,
'price': None,
'currency': None,
'original_price': None
}
# Initialize the dictionary with default values
super().__init__(default_values)
# Update with any provided positional arguments (dictionaries)
if args:
if len(args) == 1 and isinstance(args[0], dict):
self.update(args[0])
else:
raise ValueError("Only one positional argument of type 'dict' is allowed")
def __setitem__(self, key, value):
# Custom logic to handle setting price and original_price
if key == 'price' or key == 'original_price':
if isinstance(value, str):
value = self.parse_currency(raw_value=value)
super().__setitem__(key, value)
class Watch(BaseWatch):
def __init__(self, *arg, **kw):
super().__init__(*arg, **kw)
self['restock'] = Restock(kw['default']['restock']) if kw.get('default') and kw['default'].get('restock') else Restock()
self['restock_settings'] = kw['default']['restock_settings'] if kw.get('default',{}).get('restock_settings') else {
'follow_price_changes': True,
'in_stock_processing' : 'in_stock_only'
} #@todo update
def clear_watch(self):
super().clear_watch()
self.update({'restock': Restock()})
def extra_notification_token_values(self):
values = super().extra_notification_token_values()
values['restock'] = self.get('restock', {})
return values
def extra_notification_token_placeholder_info(self):
values = super().extra_notification_token_placeholder_info()
values.append(('restock.price', "Price detected"))
values.append(('restock.original_price', "Original price at first check"))
return values
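# Illustrative only: Restock.__setitem__ runs parse_currency() on string prices, other keys
# are stored as-is (the values below are placeholders).
#
#   r = Restock()
#   r['price'] = "1.299,95"       -> stored as 1299.95 (comma treated as the decimal separator)
#   r['original_price'] = "$159"  -> stored as 159.0
#   r['in_stock'] = True          -> stored unchanged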

View File

@@ -0,0 +1,81 @@
from wtforms import (
BooleanField,
validators,
FloatField
)
from wtforms.fields.choices import RadioField
from wtforms.fields.form import FormField
from wtforms.form import Form
from changedetectionio.forms import processor_text_json_diff_form
class RestockSettingsForm(Form):
in_stock_processing = RadioField(label='Re-stock detection', choices=[
('in_stock_only', "In Stock only (Out Of Stock -> In Stock only)"),
('all_changes', "Any availability changes"),
('off', "Off, don't follow availability/restock"),
], default="in_stock_only")
price_change_min = FloatField('Below price to trigger notification', [validators.Optional()],
render_kw={"placeholder": "No limit", "size": "10"})
price_change_max = FloatField('Above price to trigger notification', [validators.Optional()],
render_kw={"placeholder": "No limit", "size": "10"})
price_change_threshold_percent = FloatField('Threshold in % for price changes since the original price', validators=[
validators.Optional(),
validators.NumberRange(min=0, max=100, message="Should be between 0 and 100"),
], render_kw={"placeholder": "0%", "size": "5"})
follow_price_changes = BooleanField('Follow price changes', default=True)
class processor_settings_form(processor_text_json_diff_form):
restock_settings = FormField(RestockSettingsForm)
def extra_tab_content(self):
return 'Restock & Price Detection'
def extra_form_content(self):
output = ""
if getattr(self, 'watch', None) and getattr(self, 'datastore'):
for tag_uuid in self.watch.get('tags'):
tag = self.datastore.data['settings']['application']['tags'].get(tag_uuid, {})
if tag.get('overrides_watch'):
# @todo - Quick and dirty, can't access 'url_for' here because it's out of scope somehow
output = f"""<p><strong>Note! A Group tag overrides the restock and price detection here.</strong></p><style>#restock-fieldset-price-group {{ opacity: 0.6; }}</style>"""
output += """
{% from '_helpers.html' import render_field, render_checkbox_field, render_button %}
<script>
$(document).ready(function () {
toggleOpacity('#restock_settings-follow_price_changes', '.price-change-minmax', true);
});
</script>
<fieldset id="restock-fieldset-price-group">
<div class="pure-control-group">
<fieldset class="pure-group inline-radio">
{{ render_field(form.restock_settings.in_stock_processing) }}
</fieldset>
<fieldset class="pure-group">
{{ render_checkbox_field(form.restock_settings.follow_price_changes) }}
<span class="pure-form-message-inline">Changes in price should trigger a notification</span>
</fieldset>
<fieldset class="pure-group price-change-minmax">
{{ render_field(form.restock_settings.price_change_min, placeholder=watch.get('restock', {}).get('price')) }}
<span class="pure-form-message-inline">Minimum amount, Trigger a change/notification when the price drops <i>below</i> this value.</span>
</fieldset>
<fieldset class="pure-group price-change-minmax">
{{ render_field(form.restock_settings.price_change_max, placeholder=watch.get('restock', {}).get('price')) }}
<span class="pure-form-message-inline">Maximum amount, Trigger a change/notification when the price rises <i>above</i> this value.</span>
</fieldset>
<fieldset class="pure-group price-change-minmax">
{{ render_field(form.restock_settings.price_change_threshold_percent) }}
<span class="pure-form-message-inline">Price must change more than this % to trigger a change since the first check.</span><br>
<span class="pure-form-message-inline">For example, If the product is $1,000 USD originally, <strong>2%</strong> would mean it has to change more than $20 since the first check.</span><br>
</fieldset>
</div>
</fieldset>
"""
return output

View File

@@ -0,0 +1,314 @@
from .. import difference_detection_processor
from ..exceptions import ProcessorException
from . import Restock
from loguru import logger
import urllib3
import time
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
name = 'Re-stock & Price detection for single product pages'
description = 'Detects if the product goes back to in-stock'
class UnableToExtractRestockData(Exception):
def __init__(self, status_code):
# Set this so we can use it in other parts of the app
self.status_code = status_code
return
class MoreThanOnePriceFound(Exception):
def __init__(self):
return
def _search_prop_by_value(matches, value):
for properties in matches:
for prop in properties:
if value in prop[0]:
return prop[1] # Yield the desired value and exit the function
def _deduplicate_prices(data):
import re
'''
Some price data has multiple entries, OR it has a single entry with ['$159', '159', 159, "$ 159"] or just "159"
Get all the values, clean it and add it to a set then return the unique values
'''
unique_data = set()
# Collect each price as a normalised float, keeping only the unique values
for datum in data:
if isinstance(datum.value, list):
# Process each item in the list
normalized_value = set([float(re.sub(r'[^\d.]', '', str(item))) for item in datum.value if str(item).strip()])
unique_data.update(normalized_value)
else:
# Process single value
v = float(re.sub(r'[^\d.]', '', str(datum.value)))
unique_data.add(v)
return list(unique_data)
# should return Restock()
# add casting?
def get_itemprop_availability(html_content) -> Restock:
"""
Kind of funny/cool way to find price/availability across many different possibilities.
Use 'extruct' to find any possible RDFa/microdata/json-ld data, make a JSON string from the output then search it.
"""
from jsonpath_ng import parse
import re
now = time.time()
import extruct
logger.trace(f"Imported extruct module in {time.time() - now:.3f}s")
now = time.time()
# Extruct is very slow, I'm wondering if some ML is going to be faster (800ms on my i7), 'rdfa' seems to be the heaviest.
syntaxes = ['dublincore', 'json-ld', 'microdata', 'microformat', 'opengraph']
try:
data = extruct.extract(html_content, syntaxes=syntaxes)
except Exception as e:
logger.warning(f"Unable to extract data, document parsing with extruct failed with {type(e).__name__} - {str(e)}")
return Restock()
logger.trace(f"Extruct basic extract of all metadata done in {time.time() - now:.3f}s")
# First phase, dead simple scanning of anything that looks useful
value = Restock()
if data:
logger.debug(f"Using jsonpath to find price/availability/etc")
price_parse = parse('$..(price|Price)')
pricecurrency_parse = parse('$..(pricecurrency|currency|priceCurrency )')
availability_parse = parse('$..(availability|Availability)')
price_result = _deduplicate_prices(price_parse.find(data))
if price_result:
# Right now, we just support single product items, maybe we will store the whole actual metadata separately in the future and
# parse that for the UI?
if len(price_result) > 1:
# See if all prices are different, in the case that one product has many embedded data types with the same price
# One might have $121.95 and another 121.95 etc
logger.warning(f"More than one price found {price_result}, throwing exception, cant use this plugin.")
raise MoreThanOnePriceFound()
value['price'] = price_result[0]
pricecurrency_result = pricecurrency_parse.find(data)
if pricecurrency_result:
value['currency'] = pricecurrency_result[0].value
availability_result = availability_parse.find(data)
if availability_result:
value['availability'] = availability_result[0].value
if value.get('availability'):
value['availability'] = re.sub(r'(?i)^(https|http)://schema.org/', '',
value.get('availability').strip(' "\'').lower()) if value.get('availability') else None
# Second, go dig OpenGraph which is something that jsonpath_ng cant do because of the tuples and double-dots (:)
if not value.get('price') or value.get('availability'):
logger.debug(f"Alternatively digging through OpenGraph properties for restock/price info..")
jsonpath_expr = parse('$..properties')
for match in jsonpath_expr.find(data):
if not value.get('price'):
value['price'] = _search_prop_by_value([match.value], "price:amount")
if not value.get('availability'):
value['availability'] = _search_prop_by_value([match.value], "product:availability")
if not value.get('currency'):
value['currency'] = _search_prop_by_value([match.value], "price:currency")
logger.trace(f"Processed with Extruct in {time.time()-now:.3f}s")
return value
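# Illustrative only: the kind of embedded product metadata get_itemprop_availability() looks for,
# e.g. a JSON-LD block such as
#
#   <script type="application/ld+json">
#     {"@type": "Product",
#      "offers": {"price": "159.00", "priceCurrency": "USD",
#                 "availability": "https://schema.org/InStock"}}
#   </script>
#
# which would come back from the function above as roughly:
#   Restock({'price': 159.0, 'currency': 'USD', 'availability': 'instock'})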
def is_between(number, lower=None, upper=None):
"""
Check if a number is between two values.
Parameters:
number (float): The number to check.
lower (float or None): The lower bound (inclusive). If None, no lower bound.
upper (float or None): The upper bound (inclusive). If None, no upper bound.
Returns:
bool: True if the number is between the lower and upper bounds, False otherwise.
"""
return (lower is None or lower <= number) and (upper is None or number <= upper)
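# Illustrative usage of is_between():
#   is_between(10, lower=5)           -> True  (no upper bound)
#   is_between(3, lower=5, upper=20)  -> False
#   is_between(7)                     -> True  (no bounds at all)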
class perform_site_check(difference_detection_processor):
screenshot = None
xpath_data = None
def run_changedetection(self, watch):
import hashlib
if not watch:
raise Exception("Watch no longer exists.")
# Unset any existing notification error
update_obj = {'last_notification_error': False, 'last_error': False, 'restock': Restock()}
self.screenshot = self.fetcher.screenshot
self.xpath_data = self.fetcher.xpath_data
# Track the content type
update_obj['content_type'] = self.fetcher.headers.get('Content-Type', '')
update_obj["last_check_status"] = self.fetcher.get_last_status_code()
# Only try to process restock information (like scraping for keywords) if the page was actually rendered correctly.
# Otherwise it will assume "in stock" because nothing suggesting the opposite was found
from ...html_tools import html_to_text
text = html_to_text(self.fetcher.content)
logger.debug(f"Length of text after conversion: {len(text)}")
if not len(text):
from ...content_fetchers.exceptions import ReplyWithContentButNoText
raise ReplyWithContentButNoText(url=watch.link,
status_code=self.fetcher.get_last_status_code(),
screenshot=self.fetcher.screenshot,
html_content=self.fetcher.content,
xpath_data=self.fetcher.xpath_data
)
# Which restock settings to compare against?
restock_settings = watch.get('restock_settings', {})
# See if any tags have 'activate for individual watches in this tag/group?' enabled and use the first we find
for tag_uuid in watch.get('tags'):
tag = self.datastore.data['settings']['application']['tags'].get(tag_uuid, {})
if tag.get('overrides_watch'):
restock_settings = tag.get('restock_settings', {})
logger.info(f"Watch {watch.get('uuid')} - Tag '{tag.get('title')}' selected for restock settings override")
break
itemprop_availability = {}
try:
itemprop_availability = get_itemprop_availability(self.fetcher.content)
except MoreThanOnePriceFound as e:
# Add the real data
raise ProcessorException(message="Cannot run, more than one price detected, this plugin is only for product pages with ONE product, try the content-change detection mode.",
url=watch.get('url'),
status_code=self.fetcher.get_last_status_code(),
screenshot=self.fetcher.screenshot,
xpath_data=self.fetcher.xpath_data
)
# Something valid in get_itemprop_availability() by scraping metadata ?
if itemprop_availability.get('price') or itemprop_availability.get('availability'):
# Store for other usage
update_obj['restock'] = itemprop_availability
if itemprop_availability.get('availability'):
# @todo: Configurable?
if any(substring.lower() in itemprop_availability['availability'].lower() for substring in [
'instock',
'instoreonly',
'limitedavailability',
'onlineonly',
'presale']
):
update_obj['restock']['in_stock'] = True
else:
update_obj['restock']['in_stock'] = False
# Main detection method
fetched_md5 = None
# store original price if not set
if itemprop_availability and itemprop_availability.get('price') and not itemprop_availability.get('original_price'):
itemprop_availability['original_price'] = itemprop_availability.get('price')
update_obj['restock']["original_price"] = itemprop_availability.get('price')
if not self.fetcher.instock_data and not itemprop_availability.get('availability') and not itemprop_availability.get('price'):
raise ProcessorException(
message=f"Unable to extract restock data for this page unfortunately. (Got code {self.fetcher.get_last_status_code()} from server), no embedded stock information was found and nothing interesting in the text, try using this watch with Chrome.",
url=watch.get('url'),
status_code=self.fetcher.get_last_status_code(),
screenshot=self.fetcher.screenshot,
xpath_data=self.fetcher.xpath_data
)
logger.debug(f"self.fetcher.instock_data is - '{self.fetcher.instock_data}' and itemprop_availability.get('availability') is {itemprop_availability.get('availability')}")
# Nothing automatic in microdata found, revert to scraping the page
if self.fetcher.instock_data and itemprop_availability.get('availability') is None:
# 'Possibly in stock' comes from stock-not-in-stock.js when no string found above the fold.
# Careful! this does not really come from chrome/js when the watch is set to plaintext
update_obj['restock']["in_stock"] = True if self.fetcher.instock_data == 'Possibly in stock' else False
logger.debug(f"Watch UUID {watch.get('uuid')} restock check returned instock_data - '{self.fetcher.instock_data}' from JS scraper.")
# Very often websites will lie about the 'availability' in the metadata, so if the scraped version says its NOT in stock, use that.
if self.fetcher.instock_data and self.fetcher.instock_data != 'Possibly in stock':
if update_obj['restock'].get('in_stock'):
logger.warning(
f"Lie detected in the availability machine data!! when scraping said its not in stock!! itemprop was '{itemprop_availability}' and scraped from browser was '{self.fetcher.instock_data}' update obj was {update_obj['restock']} ")
logger.warning(f"Setting instock to FALSE, scraper found '{self.fetcher.instock_data}' in the body but metadata reported not-in-stock")
update_obj['restock']["in_stock"] = False
# What we store in the snapshot
price = update_obj.get('restock').get('price') if update_obj.get('restock').get('price') else ""
snapshot_content = f"In Stock: {update_obj.get('restock').get('in_stock')} - Price: {price}"
# Main detection method
fetched_md5 = hashlib.md5(snapshot_content.encode('utf-8')).hexdigest()
# The main thing that all this at the moment comes down to :)
changed_detected = False
logger.debug(f"Watch UUID {watch.get('uuid')} restock check - Previous MD5: {watch.get('previous_md5')}, Fetched MD5 {fetched_md5}")
# out of stock -> back in stock only?
if watch.get('restock') and watch['restock'].get('in_stock') != update_obj['restock'].get('in_stock'):
# Yes if we only care about it going to instock, AND we are in stock
if restock_settings.get('in_stock_processing') == 'in_stock_only' and update_obj['restock']['in_stock']:
changed_detected = True
if restock_settings.get('in_stock_processing') == 'all_changes':
# All cases
changed_detected = True
if restock_settings.get('follow_price_changes') and watch.get('restock') and update_obj.get('restock') and update_obj['restock'].get('price'):
price = float(update_obj['restock'].get('price'))
# Default to current price if no previous price found
if watch['restock'].get('original_price'):
previous_price = float(watch['restock'].get('original_price'))
# It was different, but negate it further down
if price != previous_price:
changed_detected = True
# Minimum/maximum price limit
if update_obj.get('restock') and update_obj['restock'].get('price'):
logger.debug(
f"{watch.get('uuid')} - Change was detected, 'price_change_max' is '{restock_settings.get('price_change_max', '')}' 'price_change_min' is '{restock_settings.get('price_change_min', '')}', price from website is '{update_obj['restock'].get('price', '')}'.")
if update_obj['restock'].get('price'):
min_limit = float(restock_settings.get('price_change_min')) if restock_settings.get('price_change_min') else None
max_limit = float(restock_settings.get('price_change_max')) if restock_settings.get('price_change_max') else None
price = float(update_obj['restock'].get('price'))
logger.debug(f"{watch.get('uuid')} after float conversion - Min limit: '{min_limit}' Max limit: '{max_limit}' Price: '{price}'")
if min_limit or max_limit:
if is_between(number=price, lower=min_limit, upper=max_limit):
# Price was between min/max limit, so there was nothing to do in any case
logger.trace(f"{watch.get('uuid')} {price} is between {min_limit} and {max_limit}, nothing to check, forcing changed_detected = False (was {changed_detected})")
changed_detected = False
else:
logger.trace(f"{watch.get('uuid')} {price} is between {min_limit} and {max_limit}, continuing normal comparison")
# Price comparison by %
if watch['restock'].get('original_price') and changed_detected and restock_settings.get('price_change_threshold_percent'):
previous_price = float(watch['restock'].get('original_price'))
pc = float(restock_settings.get('price_change_threshold_percent'))
change = abs((price - previous_price) / previous_price * 100)
if change and change <= pc:
logger.debug(f"{watch.get('uuid')} Override change-detected to FALSE because % threshold ({pc}%) was {change:.3f}%")
changed_detected = False
else:
logger.debug(f"{watch.get('uuid')} Price change was {change:.3f}% , (threshold {pc}%)")
# Always record the new checksum
update_obj["previous_md5"] = fetched_md5
return changed_detected, update_obj, snapshot_content.strip()

View File

@@ -0,0 +1,115 @@
from loguru import logger
def _task(watch, update_handler):
from changedetectionio.content_fetchers.exceptions import ReplyWithContentButNoText
from changedetectionio.processors.text_json_diff.processor import FilterNotFoundInResponse
text_after_filter = ''
try:
# The slow process (we run 2 of these in parallel)
changed_detected, update_obj, text_after_filter = update_handler.run_changedetection(watch=watch)
except FilterNotFoundInResponse as e:
text_after_filter = f"Filter not found in HTML: {str(e)}"
except ReplyWithContentButNoText as e:
text_after_filter = f"Filter found but no text (empty result)"
except Exception as e:
text_after_filter = f"Error: {str(e)}"
if not text_after_filter.strip():
text_after_filter = 'Empty content'
# because run_changedetection always returns bytes due to saving the snapshots etc
text_after_filter = text_after_filter.decode('utf-8') if isinstance(text_after_filter, bytes) else text_after_filter
return text_after_filter
def prepare_filter_prevew(datastore, watch_uuid):
'''Used by @app.route("/edit/<string:uuid>/preview-rendered", methods=['POST'])'''
from changedetectionio import forms, html_tools
from changedetectionio.model.Watch import model as watch_model
from concurrent.futures import ProcessPoolExecutor
from copy import deepcopy
from flask import request, jsonify
import brotli
import importlib
import os
import time
now = time.time()
text_after_filter = ''
text_before_filter = ''
trigger_line_numbers = []
ignore_line_numbers = []
tmp_watch = deepcopy(datastore.data['watching'].get(watch_uuid))
if tmp_watch and tmp_watch.history and os.path.isdir(tmp_watch.watch_data_dir):
# Splice in the temporary stuff from the form
form = forms.processor_text_json_diff_form(formdata=request.form if request.method == 'POST' else None,
data=request.form
)
# Only update vars that came in via the AJAX post
p = {k: v for k, v in form.data.items() if k in request.form.keys()}
tmp_watch.update(p)
blank_watch_no_filters = watch_model()
blank_watch_no_filters['url'] = tmp_watch.get('url')
latest_filename = next(reversed(tmp_watch.history))
html_fname = os.path.join(tmp_watch.watch_data_dir, f"{latest_filename}.html.br")
with open(html_fname, 'rb') as f:
decompressed_data = brotli.decompress(f.read()).decode('utf-8') if html_fname.endswith('.br') else f.read().decode('utf-8')
# Just like a normal change detection except provide a fake "watch" object and don't call .call_browser()
processor_module = importlib.import_module("changedetectionio.processors.text_json_diff.processor")
update_handler = processor_module.perform_site_check(datastore=datastore,
watch_uuid=tmp_watch.get('uuid') # probably not needed anymore anyway?
)
# Use the last loaded HTML as the input
update_handler.datastore = datastore
update_handler.fetcher.content = str(decompressed_data) # str() because playwright/puppeteer/requests return string
update_handler.fetcher.headers['content-type'] = tmp_watch.get('content-type')
# Process our watch with filters and the HTML from disk, and also a blank watch with no filters but also with the same HTML from disk
# Do this as a parallel process because it could take some time
with ProcessPoolExecutor(max_workers=2) as executor:
future1 = executor.submit(_task, tmp_watch, update_handler)
future2 = executor.submit(_task, blank_watch_no_filters, update_handler)
text_after_filter = future1.result()
text_before_filter = future2.result()
try:
trigger_line_numbers = html_tools.strip_ignore_text(content=text_after_filter,
wordlist=tmp_watch['trigger_text'],
mode='line numbers'
)
except Exception as e:
text_before_filter = f"Error: {str(e)}"
try:
text_to_ignore = tmp_watch.get('ignore_text', []) + datastore.data['settings']['application'].get('global_ignore_text', [])
ignore_line_numbers = html_tools.strip_ignore_text(content=text_after_filter,
wordlist=text_to_ignore,
mode='line numbers'
)
except Exception as e:
text_before_filter = f"Error: {str(e)}"
logger.trace(f"Parsed in {time.time() - now:.3f}s")
return jsonify(
{
'after_filter': text_after_filter,
'before_filter': text_before_filter.decode('utf-8') if isinstance(text_before_filter, bytes) else text_before_filter,
'duration': time.time() - now,
'trigger_line_numbers': trigger_line_numbers,
'ignore_line_numbers': ignore_line_numbers,
}
)
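# Illustrative only: the JSON shape returned by prepare_filter_prevew() above, with placeholder values:
#
#   {
#     "after_filter": "Price: 159.00",
#     "before_filter": "Menu\nPrice: 159.00\nFooter",
#     "duration": 0.42,
#     "trigger_line_numbers": [1],
#     "ignore_line_numbers": []
#   }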

View File

@@ -0,0 +1,370 @@
# HTML to TEXT/JSON difference processor
import hashlib
import json
import os
import re
import urllib3
from changedetectionio.processors import difference_detection_processor
from changedetectionio.html_tools import PERL_STYLE_REGEX, cdata_in_document_to_text, TRANSLATE_WHITESPACE_TABLE
from changedetectionio import html_tools, content_fetchers
from changedetectionio.blueprint.price_data_follower import PRICE_DATA_TRACK_ACCEPT, PRICE_DATA_TRACK_REJECT
from loguru import logger
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
name = 'Webpage Text/HTML, JSON and PDF changes'
description = 'Detects all text changes where possible'
json_filter_prefixes = ['json:', 'jq:', 'jqraw:']
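# Illustrative filter rule formats handled further below (placeholder selectors, not from a real watch):
#   "json:$.offers.price"      - JSONPath extraction
#   "jq:.items[0].title"       - jq expression
#   "jqraw:.items[0]"          - jq expression, raw output
#   "xpath://div[@id='main']"  - XPath (any rule starting with '/' is also treated as XPath)
#   "xpath1://div/span"        - XPath 1.0
#   "#price .value"            - anything else is treated as a CSS include filter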
class FilterNotFoundInResponse(ValueError):
def __init__(self, msg, screenshot=None, xpath_data=None):
self.screenshot = screenshot
self.xpath_data = xpath_data
ValueError.__init__(self, msg)
class PDFToHTMLToolNotFound(ValueError):
def __init__(self, msg):
ValueError.__init__(self, msg)
# Some common stuff here that can be moved to a base class
# (set_proxy_from_list)
class perform_site_check(difference_detection_processor):
def run_changedetection(self, watch):
changed_detected = False
html_content = ""
screenshot = False # as bytes
stripped_text_from_html = ""
if not watch:
raise Exception("Watch no longer exists.")
# Unset any existing notification error
update_obj = {'last_notification_error': False, 'last_error': False}
url = watch.link
self.screenshot = self.fetcher.screenshot
self.xpath_data = self.fetcher.xpath_data
# Track the content type
update_obj['content_type'] = self.fetcher.get_all_headers().get('content-type', '').lower()
# Watches added automatically in the queue manager will skip if it's the same checksum as the previous run
# Saves a lot of CPU
update_obj['previous_md5_before_filters'] = hashlib.md5(self.fetcher.content.encode('utf-8')).hexdigest()
# Fetching complete, now filters
# @note: I feel like the following should be in a more obvious chain system
# - Check filter text
# - Is the checksum different?
# - Do we convert to JSON?
# https://stackoverflow.com/questions/41817578/basic-method-chaining ?
# return content().textfilter().jsonextract().checksumcompare() ?
is_json = 'application/json' in self.fetcher.get_all_headers().get('content-type', '').lower()
is_html = not is_json
is_rss = False
ctype_header = self.fetcher.get_all_headers().get('content-type', '').lower()
# Go into RSS preprocess for converting CDATA/comment to usable text
if any(substring in ctype_header for substring in ['application/xml', 'application/rss', 'text/xml']):
if '<rss' in self.fetcher.content[:100].lower():
self.fetcher.content = cdata_in_document_to_text(html_content=self.fetcher.content)
is_rss = True
# source: support, basically treat it as plaintext
if watch.is_source_type_url:
is_html = False
is_json = False
inline_pdf = self.fetcher.get_all_headers().get('content-disposition', '') and '%PDF-1' in self.fetcher.content[:10]
if watch.is_pdf or 'application/pdf' in self.fetcher.get_all_headers().get('content-type', '').lower() or inline_pdf:
from shutil import which
tool = os.getenv("PDF_TO_HTML_TOOL", "pdftohtml")
if not which(tool):
raise PDFToHTMLToolNotFound("Command-line `{}` tool was not found in system PATH, was it installed?".format(tool))
import subprocess
proc = subprocess.Popen(
[tool, '-stdout', '-', '-s', 'out.pdf', '-i'],
stdout=subprocess.PIPE,
stdin=subprocess.PIPE)
proc.stdin.write(self.fetcher.raw_content)
proc.stdin.close()
self.fetcher.content = proc.stdout.read().decode('utf-8')
proc.wait(timeout=60)
# Add a little metadata so we know if the file changes (like if an image changes, but the text is the same)
# @todo may cause problems with non-UTF8?
metadata = "<p>Added by changedetection.io: Document checksum - {} Filesize - {} bytes</p>".format(
hashlib.md5(self.fetcher.raw_content).hexdigest().upper(),
len(self.fetcher.content))
self.fetcher.content = self.fetcher.content.replace('</body>', metadata + '</body>')
# Better would be if Watch.model could access the global data also
# and then use getattr https://docs.python.org/3/reference/datamodel.html#object.__getitem__
# https://realpython.com/inherit-python-dict/ instead of doing it procedurally
include_filters_from_tags = self.datastore.get_tag_overrides_for_watch(uuid=watch.get('uuid'), attr='include_filters')
# 1845 - remove duplicated filters in both group and watch include filter
include_filters_rule = list(dict.fromkeys(watch.get('include_filters', []) + include_filters_from_tags))
subtractive_selectors = [*self.datastore.get_tag_overrides_for_watch(uuid=watch.get('uuid'), attr='subtractive_selectors'),
*watch.get("subtractive_selectors", []),
*self.datastore.data["settings"]["application"].get("global_subtractive_selectors", [])
]
# Inject a virtual LD+JSON price tracker rule
if watch.get('track_ldjson_price_data', '') == PRICE_DATA_TRACK_ACCEPT:
include_filters_rule += html_tools.LD_JSON_PRODUCT_OFFER_SELECTORS
has_filter_rule = len(include_filters_rule) and len(include_filters_rule[0].strip())
has_subtractive_selectors = len(subtractive_selectors) and len(subtractive_selectors[0].strip())
if is_json and not has_filter_rule:
include_filters_rule.append("json:$")
has_filter_rule = True
if is_json:
# Sort the JSON so we don't get false alerts when the content is just re-ordered
try:
self.fetcher.content = json.dumps(json.loads(self.fetcher.content), sort_keys=True)
except Exception as e:
# Might have just been a snippet, or otherwise bad JSON, continue
pass
if has_filter_rule:
for filter in include_filters_rule:
if any(prefix in filter for prefix in json_filter_prefixes):
stripped_text_from_html += html_tools.extract_json_as_string(content=self.fetcher.content, json_filter=filter)
is_html = False
if is_html or watch.is_source_type_url:
# CSS Filter, extract the HTML that matches and feed that into the existing inscriptis::get_text
self.fetcher.content = html_tools.workarounds_for_obfuscations(self.fetcher.content)
html_content = self.fetcher.content
# If not JSON, and if it's not text/plain..
if 'text/plain' in self.fetcher.get_all_headers().get('content-type', '').lower():
# Don't run get_text or xpath/css filters on plaintext
stripped_text_from_html = html_content
else:
# Does it have some ld+json price data? used for easier monitoring
update_obj['has_ldjson_price_data'] = html_tools.has_ldjson_product_info(self.fetcher.content)
# Then we assume HTML
if has_filter_rule:
html_content = ""
for filter_rule in include_filters_rule:
# For HTML/XML we offer xpath as an option, just start a regular xPath "/.."
if filter_rule[0] == '/' or filter_rule.startswith('xpath:'):
html_content += html_tools.xpath_filter(xpath_filter=filter_rule.replace('xpath:', ''),
html_content=self.fetcher.content,
append_pretty_line_formatting=not watch.is_source_type_url,
is_rss=is_rss)
elif filter_rule.startswith('xpath1:'):
html_content += html_tools.xpath1_filter(xpath_filter=filter_rule.replace('xpath1:', ''),
html_content=self.fetcher.content,
append_pretty_line_formatting=not watch.is_source_type_url,
is_rss=is_rss)
else:
html_content += html_tools.include_filters(include_filters=filter_rule,
html_content=self.fetcher.content,
append_pretty_line_formatting=not watch.is_source_type_url)
if not html_content.strip():
raise FilterNotFoundInResponse(msg=include_filters_rule, screenshot=self.fetcher.screenshot, xpath_data=self.fetcher.xpath_data)
if has_subtractive_selectors:
html_content = html_tools.element_removal(subtractive_selectors, html_content)
if watch.is_source_type_url:
stripped_text_from_html = html_content
else:
# extract text
do_anchor = self.datastore.data["settings"]["application"].get("render_anchor_tag_content", False)
stripped_text_from_html = html_tools.html_to_text(html_content=html_content,
render_anchor_tag_content=do_anchor,
is_rss=is_rss) # 1874 activate the <title workaround hack
if watch.get('trim_text_whitespace'):
stripped_text_from_html = '\n'.join(line.strip() for line in stripped_text_from_html.replace("\n\n", "\n").splitlines())
# Re #340 - return the content before the 'ignore text' was applied
# Also used to calculate/show what was removed
text_content_before_ignored_filter = stripped_text_from_html
# @todo whitespace coming from missing rtrim()?
# stripped_text_from_html could be based on their preferences, replace the processed text with only that which they want to know about.
# Rewrites the processed text based on only what diff result they want to see
if watch.has_special_diff_filter_options_set() and len(watch.history.keys()):
# Now the content comes from the diff-parser and not the returned HTTP traffic, so could be some differences
from changedetectionio import diff
# needs to not include (added) etc or it may get used twice
# Replace the processed text with the preferred result
rendered_diff = diff.render_diff(previous_version_file_contents=watch.get_last_fetched_text_before_filters(),
newest_version_file_contents=stripped_text_from_html,
include_equal=False, # not the same lines
include_added=watch.get('filter_text_added', True),
include_removed=watch.get('filter_text_removed', True),
include_replaced=watch.get('filter_text_replaced', True),
line_feed_sep="\n",
include_change_type_prefix=False)
watch.save_last_text_fetched_before_filters(text_content_before_ignored_filter.encode('utf-8'))
if not rendered_diff and stripped_text_from_html:
# We had some content, but no differences were found
# Store our new file as the MD5 so it will trigger in the future
c = hashlib.md5(stripped_text_from_html.translate(TRANSLATE_WHITESPACE_TABLE).encode('utf-8')).hexdigest()
return False, {'previous_md5': c}, stripped_text_from_html.encode('utf-8')
else:
stripped_text_from_html = rendered_diff
# Treat pages with no renderable text content as a change? No by default
empty_pages_are_a_change = self.datastore.data['settings']['application'].get('empty_pages_are_a_change', False)
if not is_json and not empty_pages_are_a_change and len(stripped_text_from_html.strip()) == 0:
raise content_fetchers.exceptions.ReplyWithContentButNoText(url=url,
status_code=self.fetcher.get_last_status_code(),
screenshot=self.fetcher.screenshot,
has_filters=has_filter_rule,
html_content=html_content,
xpath_data=self.fetcher.xpath_data
)
# We rely on the actual text in the html output.. many sites have random script vars etc,
# in the future we'll implement other mechanisms.
update_obj["last_check_status"] = self.fetcher.get_last_status_code()
# 615 Extract text by regex
extract_text = watch.get('extract_text', [])
if len(extract_text) > 0:
regex_matched_output = []
for s_re in extract_text:
# in case they specified something in '/.../x'
if re.search(PERL_STYLE_REGEX, s_re, re.IGNORECASE):
regex = html_tools.perl_style_slash_enclosed_regex_to_options(s_re)
result = re.findall(regex, stripped_text_from_html)
for l in result:
if type(l) is tuple:
# @todo - some formatter option default (between groups)
regex_matched_output += list(l) + ['\n']
else:
# @todo - some formatter option default (between each ungrouped result)
regex_matched_output += [l] + ['\n']
else:
# Doesnt look like regex, just hunt for plaintext and return that which matches
# `stripped_text_from_html` will be bytes, so we must encode s_re also to bytes
r = re.compile(re.escape(s_re), re.IGNORECASE)
res = r.findall(stripped_text_from_html)
if res:
for match in res:
regex_matched_output += [match] + ['\n']
##########################################################
stripped_text_from_html = ''
if regex_matched_output:
# @todo some formatter for presentation?
stripped_text_from_html = ''.join(regex_matched_output)
if watch.get('remove_duplicate_lines'):
stripped_text_from_html = '\n'.join(dict.fromkeys(line for line in stripped_text_from_html.replace("\n\n", "\n").splitlines()))
if watch.get('sort_text_alphabetically'):
# Note: Because a <p>something</p> will add an extra line feed to signify the paragraph gap
# we end up with 'Some text\n\n', sorting will add all those extra \n at the start, so we remove them here.
stripped_text_from_html = stripped_text_from_html.replace("\n\n", "\n")
stripped_text_from_html = '\n'.join(sorted(stripped_text_from_html.splitlines(), key=lambda x: x.lower()))
### CALCULATE MD5
# If there's text to ignore
text_to_ignore = watch.get('ignore_text', []) + self.datastore.data['settings']['application'].get('global_ignore_text', [])
text_for_checksuming = stripped_text_from_html
if text_to_ignore:
text_for_checksuming = html_tools.strip_ignore_text(stripped_text_from_html, text_to_ignore)
# Re #133 - if we should strip whitespaces from triggering the change detected comparison
if text_for_checksuming and self.datastore.data['settings']['application'].get('ignore_whitespace', False):
fetched_md5 = hashlib.md5(text_for_checksuming.translate(TRANSLATE_WHITESPACE_TABLE).encode('utf-8')).hexdigest()
else:
fetched_md5 = hashlib.md5(text_for_checksuming.encode('utf-8')).hexdigest()
############ Blocking rules, after checksum #################
blocked = False
trigger_text = watch.get('trigger_text', [])
if len(trigger_text):
# Assume blocked
blocked = True
# Filter and trigger works the same, so reuse it
# It should return the line numbers that match
# Unblock flow if the trigger was found (some text remained after stripped what didnt match)
result = html_tools.strip_ignore_text(content=str(stripped_text_from_html),
wordlist=trigger_text,
mode="line numbers")
# Unblock if the trigger was found
if result:
blocked = False
text_should_not_be_present = watch.get('text_should_not_be_present', [])
if len(text_should_not_be_present):
# If anything matched, then we should block a change from happening
result = html_tools.strip_ignore_text(content=str(stripped_text_from_html),
wordlist=text_should_not_be_present,
mode="line numbers")
if result:
blocked = True
# Looks like something changed, but did it match all the rules?
if blocked:
changed_detected = False
else:
# The main thing that all this at the moment comes down to :)
if watch.get('previous_md5') != fetched_md5:
changed_detected = True
# Always record the new checksum
update_obj["previous_md5"] = fetched_md5
# On the first run of a site, watch['previous_md5'] will be None, set it the current one.
if not watch.get('previous_md5'):
watch['previous_md5'] = fetched_md5
logger.debug(f"Watch UUID {watch.get('uuid')} content check - Previous MD5: {watch.get('previous_md5')}, Fetched MD5 {fetched_md5}")
if changed_detected:
if watch.get('check_unique_lines', False):
ignore_whitespace = self.datastore.data['settings']['application'].get('ignore_whitespace')
has_unique_lines = watch.lines_contain_something_unique_compared_to_history(
lines=stripped_text_from_html.splitlines(),
ignore_whitespace=ignore_whitespace
)
# One or more lines? unsure?
if not has_unique_lines:
logger.debug(f"check_unique_lines: UUID {watch.get('uuid')} didnt have anything new setting change_detected=False")
changed_detected = False
else:
logger.debug(f"check_unique_lines: UUID {watch.get('uuid')} had unique content")
# stripped_text_from_html - Everything after filters and NO 'ignored' content
return changed_detected, update_obj, stripped_text_from_html
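
For illustration, a minimal standalone sketch of the checksum-and-compare step performed above. The whitespace translation table and the function name here are assumptions for the example only, not the project's API:

import hashlib

# Assumed stand-in for the project's TRANSLATE_WHITESPACE_TABLE
TRANSLATE_WHITESPACE_TABLE = str.maketrans('', '', '\r\n\t ')

def simple_change_check(previous_md5, text, ignore_whitespace=True):
    # Optionally strip whitespace so cosmetic reflows do not trigger a change
    if ignore_whitespace:
        text = text.translate(TRANSLATE_WHITESPACE_TABLE)
    fetched_md5 = hashlib.md5(text.encode('utf-8')).hexdigest()
    # A "change" is simply: the checksum differs from the one stored last run
    return fetched_md5 != previous_md5, fetched_md5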


@@ -0,0 +1,12 @@
[pytest]
addopts = --no-start-live-server --live-server-port=5005
#testpaths = tests pytest_invenio
#live_server_scope = function
filterwarnings =
ignore::DeprecationWarning:urllib3.*:
; logging options
log_cli = 1
log_cli_level = DEBUG
log_cli_format = %(asctime)s %(name)s: %(levelname)s %(message)s
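
A quick sketch of what the log_cli settings above do: with log_cli = 1 and log_cli_level = DEBUG, standard-library log records are streamed into the terminal while each test runs, formatted by log_cli_format. The test name below is hypothetical:

import logging

def test_logging_is_streamed_live():
    # This DEBUG record appears in the pytest output as the test executes
    logging.getLogger(__name__).debug("visible thanks to log_cli=1")
    assert True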


@@ -0,0 +1,10 @@
from dataclasses import dataclass, field
from typing import Any
# So that we can queue some metadata in `item`
# https://docs.python.org/3/library/queue.html#queue.PriorityQueue
#
@dataclass(order=True)
class PrioritizedItem:
priority: int
item: Any=field(compare=False)
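
A hypothetical usage sketch (the values are only for the example): queue.PriorityQueue orders entries by the comparable priority field, while item is excluded from comparison so arbitrary metadata can ride along:

import queue

q = queue.PriorityQueue()
q.put(PrioritizedItem(priority=5, item={'uuid': 'watch-b'}))
q.put(PrioritizedItem(priority=1, item={'uuid': 'watch-a'}))

first = q.get()
assert first.priority == 1                 # the lowest priority value is served first
assert first.item == {'uuid': 'watch-a'}   # metadata rides along, never compared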


@@ -0,0 +1,42 @@
#!/bin/bash
# live_server will throw errors even with live_server_scope=function if the live_server setup lives in different functions,
# and we like to restart the server for each test (and have each test clean up after itself)
# - merge requests welcome :)
# exit when any command fails
set -e
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
find tests/test_*py -type f|while read test_name
do
echo "TEST RUNNING $test_name"
pytest $test_name
done
echo "RUNNING WITH BASE_URL SET"
# Now re-run some tests with BASE_URL enabled
# Re #65 - Ability to include a link back to the installation, in the notification.
export BASE_URL="https://really-unique-domain.io"
pytest tests/test_notification.py
# Re-run with HIDE_REFERER set - could affect login
export HIDE_REFERER=True
pytest tests/test_access_control.py
# Re-run a few tests that will trigger brotli based storage
export SNAPSHOT_BROTLI_COMPRESSION_THRESHOLD=5
pytest tests/test_access_control.py
pytest tests/test_notification.py
pytest tests/test_backend.py
pytest tests/test_rss.py
pytest tests/test_unique_lines.py
# Check file:// will pickup a file when enabled
echo "Hello world" > /tmp/test-file.txt
ALLOW_FILE_URI=yes pytest tests/test_security.py


@@ -0,0 +1,46 @@
#!/bin/bash
# Run some tests and check whether the 'custom-browser-search-string=1' connect string appeared in the correct containers
# @todo do it again but with the puppeteer one
# enable debug
set -x
# An extra browser is configured, but we never chose to use it, so it should NOT show in the logs
docker run --rm -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest tests/custom_browser_url/test_custom_browser_url.py::test_request_not_via_custom_browser_url'
docker logs sockpuppetbrowser-custom-url &>log-custom.txt
grep 'custom-browser-search-string=1' log-custom.txt
if [ $? -ne 1 ]
then
echo "Saw a request in 'sockpuppetbrowser-custom-url' container with 'custom-browser-search-string=1' when I should not - log-custom.txt"
exit 1
fi
docker logs sockpuppetbrowser &>log.txt
grep 'custom-browser-search-string=1' log.txt
if [ $? -ne 1 ]
then
echo "Saw a request in 'browser' container with 'custom-browser-search-string=1' when I should not"
exit 1
fi
# Special connect string should appear in the custom-url container, but not in the 'default' one
docker run --rm -e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest tests/custom_browser_url/test_custom_browser_url.py::test_request_via_custom_browser_url'
docker logs sockpuppetbrowser-custom-url &>log-custom.txt
grep 'custom-browser-search-string=1' log-custom.txt
if [ $? -ne 0 ]
then
echo "Did not see request in 'sockpuppetbrowser-custom-url' container with 'custom-browser-search-string=1' when I should - log-custom.txt"
exit 1
fi
docker logs sockpuppetbrowser &>log.txt
grep 'custom-browser-search-string=1' log.txt
if [ $? -ne 1 ]
then
echo "Saw a request in 'browser' container with 'custom-browser-search-string=1' when I should not"
exit 1
fi


@@ -0,0 +1,84 @@
#!/bin/bash
# exit when any command fails
set -e
# enable debug
set -x
# Test proxy list handling, starting two squids on different ports
# Each squid adds a different header to the response, which is the main thing we test for.
docker run --network changedet-network -d --name squid-one --hostname squid-one --rm -v `pwd`/tests/proxy_list/squid.conf:/etc/squid/conf.d/debian.conf ubuntu/squid:4.13-21.10_edge
docker run --network changedet-network -d --name squid-two --hostname squid-two --rm -v `pwd`/tests/proxy_list/squid.conf:/etc/squid/conf.d/debian.conf ubuntu/squid:4.13-21.10_edge
# Used for configuring a custom proxy URL via the UI - with username+password auth
docker run --network changedet-network -d \
--name squid-custom \
--hostname squid-custom \
--rm \
-v `pwd`/tests/proxy_list/squid-auth.conf:/etc/squid/conf.d/debian.conf \
-v `pwd`/tests/proxy_list/squid-passwords.txt:/etc/squid3/passwords \
ubuntu/squid:4.13-21.10_edge
## 2nd test - actually chooses the preferred proxy from proxies.json
docker run --network changedet-network \
-v `pwd`/tests/proxy_list/proxies.json-example:/app/changedetectionio/test-datastore/proxies.json \
test-changedetectionio \
bash -c 'cd changedetectionio && pytest tests/proxy_list/test_multiple_proxy.py'
set +e
echo "- Looking for chosen.changedetection.io request in squid-one - it should NOT be here"
docker logs squid-one 2>/dev/null|grep chosen.changedetection.io
if [ $? -ne 1 ]
then
echo "Saw a request to chosen.changedetection.io in the squid logs (while checking preferred proxy - squid one) WHEN I SHOULD NOT"
exit 1
fi
set -e
echo "- Looking for chosen.changedetection.io request in squid-two"
# And one in the 'second' squid (user selects this as preferred)
docker logs squid-two 2>/dev/null|grep chosen.changedetection.io
if [ $? -ne 0 ]
then
echo "Did not see a request to chosen.changedetection.io in the squid logs (while checking preferred proxy - squid two)"
exit 1
fi
# Test the UI configurable proxies
docker run --network changedet-network \
test-changedetectionio \
bash -c 'cd changedetectionio && pytest tests/proxy_list/test_select_custom_proxy.py'
# Should see a request for one.changedetection.io in there
echo "- Looking for .changedetection.io request in squid-custom"
docker logs squid-custom 2>/dev/null|grep "TCP_TUNNEL.200.*changedetection.io"
if [ $? -ne 0 ]
then
echo "Did not see a valid request to changedetection.io in the squid logs (while checking preferred proxy - squid two)"
exit 1
fi
# Test "no-proxy" option
docker run --network changedet-network \
test-changedetectionio \
bash -c 'cd changedetectionio && pytest tests/proxy_list/test_noproxy.py'
# We need to handle grep returning 1
set +e
# Check request was never seen in any container
for c in squid-one squid-two squid-custom; do
echo ....Checking $c
docker logs $c &> $c.txt
grep noproxy $c.txt
if [ $? -ne 1 ]
then
echo "Saw request for noproxy in $c container"
cat $c.txt
exit 1
fi
done
docker kill squid-one squid-two squid-custom


@@ -0,0 +1,49 @@
#!/bin/bash
# exit when any command fails
set -e
# enable debug
set -x
# SOCKS5 related - start simple Socks5 proxy server
# SOCKSTEST=xyz should show up in the logs of this service to confirm the fetch went via the proxy
docker run --network changedet-network -d --hostname socks5proxy --rm --name socks5proxy -p 1080:1080 -e PROXY_USER=proxy_user123 -e PROXY_PASSWORD=proxy_pass123 serjs/go-socks5-proxy
docker run --network changedet-network -d --hostname socks5proxy-noauth --rm -p 1081:1080 --name socks5proxy-noauth serjs/go-socks5-proxy
echo "---------------------------------- SOCKS5 -------------------"
# SOCKS5 related - test from proxies.json
docker run --network changedet-network \
-v `pwd`/tests/proxy_socks5/proxies.json-example:/app/changedetectionio/test-datastore/proxies.json \
--rm \
-e "FLASK_SERVER_NAME=cdio" \
--hostname cdio \
-e "SOCKSTEST=proxiesjson" \
test-changedetectionio \
bash -c 'cd changedetectionio && pytest --live-server-host=0.0.0.0 --live-server-port=5004 -s tests/proxy_socks5/test_socks5_proxy_sources.py'
# SOCKS5 related - by manually entering in UI
docker run --network changedet-network \
--rm \
-e "FLASK_SERVER_NAME=cdio" \
--hostname cdio \
-e "SOCKSTEST=manual" \
test-changedetectionio \
bash -c 'cd changedetectionio && pytest --live-server-host=0.0.0.0 --live-server-port=5004 -s tests/proxy_socks5/test_socks5_proxy.py'
# SOCKS5 related - test from proxies.json via playwright - NOTE: PLAYWRIGHT DOESN'T SUPPORT AUTHENTICATING PROXIES
docker run --network changedet-network \
-e "SOCKSTEST=manual-playwright" \
--hostname cdio \
-e "FLASK_SERVER_NAME=cdio" \
-v `pwd`/tests/proxy_socks5/proxies.json-example-noauth:/app/changedetectionio/test-datastore/proxies.json \
-e "PLAYWRIGHT_DRIVER_URL=ws://sockpuppetbrowser:3000" \
--rm \
test-changedetectionio \
bash -c 'cd changedetectionio && pytest --live-server-host=0.0.0.0 --live-server-port=5004 -s tests/proxy_socks5/test_socks5_proxy_sources.py'
echo "socks5 server logs"
docker logs socks5proxy
echo "----------------------------------"
docker kill socks5proxy socks5proxy-noauth


@@ -0,0 +1,18 @@
"""
Safe Jinja2 render with max payload sizes
See https://jinja.palletsprojects.com/en/3.1.x/sandbox/#security-considerations
"""
import jinja2.sandbox
import typing as t
import os
JINJA2_MAX_RETURN_PAYLOAD_SIZE = 1024 * int(os.getenv("JINJA2_MAX_RETURN_PAYLOAD_SIZE_KB", 1024 * 10))
def render(template_str, **args: t.Any) -> str:
jinja2_env = jinja2.sandbox.ImmutableSandboxedEnvironment(extensions=['jinja2_time.TimeExtension'])
output = jinja2_env.from_string(template_str).render(args)
return output[:JINJA2_MAX_RETURN_PAYLOAD_SIZE]
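
A hypothetical usage sketch: the sandboxed environment blocks access to unsafe attributes, jinja2_time's TimeExtension provides the {% now %} tag, and the final slice truncates the rendered string to JINJA2_MAX_RETURN_PAYLOAD_SIZE characters:

# e.g. render a notification body with the current UTC time and a variable
print(render("Checked at {% now 'utc', '%Y-%m-%d %H:%M' %} - price is now {{ price }}", price=9.99))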

(Three new binary image files added - 33 KiB, 40 KiB and 31 KiB - not shown.)

Some files were not shown because too many files have changed in this diff.