Compare commits

...

876 Commits

Author SHA1 Message Date
dgtlmoon
ef857b3c51 Re #1392 - Adjust PDF detect rules 2023-02-07 17:21:37 +01:00
dgtlmoon
0f43213d9d UI - preview page - Fix bug where playwright/chrome was system default and [preview] didn't show snapshot 2023-02-07 16:55:34 +01:00
dgtlmoon
93c57d9fad Adding example docker-compose.yml config to ignore errors from self-signed certs #1389 2023-02-06 17:24:12 +01:00
dgtlmoon
3cdd075baf 0.40.2 2023-02-03 19:20:13 +01:00
dgtlmoon
5c617e8530 Code cleanup - remove unused import 2023-02-03 18:35:58 +01:00
dgtlmoon
1a48965ba1 UI fix - Fix logic for showing screenshot on diff page (#1379) 2023-02-03 11:23:48 +01:00
dgtlmoon
41856c4ed8 Re #1365 - Playwright - Browser "Service Workers" should be enabled by default but unset via env var PLAYWRIGHT_SERVICE_WORKERS=block (#1367) 2023-02-01 20:50:40 +01:00
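For context, a minimal sketch of what this env var toggles, assuming it simply feeds Playwright's `service_workers` context option; the exact wiring inside changedetection.io may differ:

```python
import os
from playwright.sync_api import sync_playwright

# Assumption: PLAYWRIGHT_SERVICE_WORKERS selects Playwright's service_workers
# context option ("allow" by default per the commit above, "block" to disable them).
service_workers = os.getenv("PLAYWRIGHT_SERVICE_WORKERS", "allow")

with sync_playwright() as p:
    browser = p.chromium.launch()
    context = browser.new_context(service_workers=service_workers)
    page = context.new_page()
    page.goto("https://example.com")
    browser.close()
```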
dgtlmoon
0ed897c50f New setting to allow passwordless access to your 'diff' page - perfect for sharing your diff page securely, refactored login code (#1357) 2023-01-29 22:36:55 +01:00
dgtlmoon
f8e587c415 Security - Possible stored XSS in watch list - Only permit HTTP/HTTPS/FTP by default - override with env var SAFE_PROTOCOL_REGEX (#1359) 2023-01-29 11:12:06 +01:00
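A rough sketch of the protection described above, assuming the env var holds a regex matched against the watch URL's scheme; the default pattern shown here is an assumption, not the shipped value:

```python
import os
import re

# Assumed default pattern; the real default lives in the application code and can be
# overridden via the SAFE_PROTOCOL_REGEX environment variable mentioned above.
SAFE_PROTOCOL_REGEX = os.getenv("SAFE_PROTOCOL_REGEX", r"^(https?|ftp|file):")

def is_safe_url(url: str) -> bool:
    """Accept a watch URL only if its scheme matches the safe-protocol regex."""
    return re.search(SAFE_PROTOCOL_REGEX, url.strip(), re.IGNORECASE) is not None

print(is_safe_url("https://example.com"))   # True
print(is_safe_url("javascript:alert(1)"))   # False - rejected, mitigating stored XSS
```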
dgtlmoon
d47a25eb6d Playwright - Removing old bug fix where playwright needed the screenshot called twice to make the full-screen screenshot actually be fullscreen (#1356) 2023-01-28 15:02:53 +01:00
dgtlmoon
9a0792d185 Fetch backend UI default fixes for VisualSelector and BrowserSteps (#1344) 2023-01-25 19:47:54 +01:00
dgtlmoon
948ef7ade4 Fix fetch UI default fetch backend option icon (#1343) 2023-01-25 18:07:44 +01:00
dgtlmoon
0ba139f8f9 Docker container build - docker container buildx version change causing errors with watchtower and others (#1336) 2023-01-24 23:45:43 +01:00
dgtlmoon
a9431191fc 0.40.1.1 2023-01-22 13:03:15 +01:00
dgtlmoon
774451f256 Re #1328 - add -6 flag to enable IPv6 (#1329) 2023-01-22 11:10:25 +01:00
dgtlmoon
04577cbf32 0.40.1.0 2023-01-21 15:38:54 +01:00
dgtlmoon
f2864af8f1 Update README.md 2023-01-21 14:02:14 +01:00
dgtlmoon
9a36d081c4 Setting docker-compose.yml version to 3.2 so it works with portainer and others #1306 #1144 #1079 2023-01-21 13:50:36 +01:00
dgtlmoon
7048a0acbd UI - Fix wrong logic when dealing with webdriver/playwright watch screenshot settings (#1325) 2023-01-21 13:47:32 +01:00
dgtlmoon
fba719ab8d Ability for watch to use a more obvious system default fetcher (#1320) 2023-01-19 21:57:58 +01:00
dgtlmoon
7c5e2d00af Update README.md 2023-01-17 22:02:51 +01:00
dgtlmoon
02b8fc0c18 pip - eventlet doesn't support dnspython >=2.3.0 (Fixes build error) 2023-01-17 22:01:56 +01:00
dgtlmoon
de15dfd80d Reliability fix - Remove loop that could cause app to stop checking if data changes (#1313) 2023-01-15 16:12:47 +01:00
dgtlmoon
024c8d8fd5 API - Improvements, support PUT for updating existing watch, set muted state, set paused state, see https://changedetection.io/docs/api_v1/index.html (#1213) 2023-01-10 19:00:57 +01:00
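A small sketch of calling that API from Python, assuming the x-api-key header and /watch/&lt;uuid&gt; endpoint described in the linked docs; the base URL, key, UUID and JSON field names below are placeholders:

```python
import requests

BASE_URL = "http://localhost:5000/api/v1"        # your changedetection.io instance
HEADERS = {"x-api-key": "YOUR_API_KEY"}           # API key from the settings page
uuid = "00000000-0000-0000-0000-000000000000"     # an existing watch UUID

# PUT updates an existing watch in place; the field names here are illustrative,
# check the linked API documentation for the exact schema.
resp = requests.put(f"{BASE_URL}/watch/{uuid}", headers=HEADERS,
                    json={"paused": True, "notification_muted": True})
resp.raise_for_status()
```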
dgtlmoon
fab7d325f7 Data storage - Don't recreate DB if it's corrupt, exit with error cleanly so operator can look into the problem (#1296) 2023-01-08 14:47:31 +01:00
jtagcat
58c7cbeac7 UI: Updating queued success message (#1285) 2023-01-05 21:12:02 +01:00
Abhishek Malani
ab9efdfd14 README.md - Fix release link (#1277) 2022-12-29 11:06:51 +01:00
Hmmbob
65d5a5d34c Notifications: updating apprise (slack notification fixes and others) (#1272) 2022-12-28 18:34:55 +01:00
dgtlmoon
93c157ee7f Remove docker-compose version so it works on any modern version #1144 (#1268) 2022-12-26 20:37:31 +01:00
Bill Metangmo
de85db887c Update the docker compose file to any version (#1079) (#1144) 2022-12-26 20:36:42 +01:00
dgtlmoon
50805ca38a IPv6 support for listening on (#1267) 2022-12-26 20:36:16 +01:00
dgtlmoon
fc6424c39e Test improvements (#1264) 2022-12-26 14:17:40 +01:00
dgtlmoon
f0966eb23a 0.40.0.4 2022-12-25 18:25:45 +01:00
dgtlmoon
e4fb5ab4da UI - Suggest adding proxy for watch when 403 access denied is reached (#1260) 2022-12-23 22:26:24 +01:00
dgtlmoon
e99f07a51d Filters & Notifications - fixed tokens in filter not found notification 2022-12-22 10:05:17 +01:00
dgtlmoon
08ee223b5f UI - Fix broken html tags in settings page 2022-12-20 18:57:26 +01:00
dgtlmoon
572f9b8a31 Proxy Settings in UI - TidyUp BrightData text 2022-12-20 10:08:16 +01:00
dgtlmoon
fcfd1b5e10 Ability to configure extra proxies via the UI (#1235) 2022-12-19 21:48:01 +01:00
dgtlmoon
0790dd555e Docker container updates - use Python 3.10, remove unused packages 2022-12-19 20:46:02 +01:00
dgtlmoon
0b20dc7712 Tidy up list icons a bit (#1250) 2022-12-19 20:30:32 +01:00
dgtlmoon
13c4121f52 PDF File change detection - Initial PDF fetcher support with basic text extraction (#1244) 2022-12-19 17:51:41 +01:00
dgtlmoon
e8e176f3bd Testing - Run test as fully built docker container (#1245) 2022-12-19 14:41:34 +01:00
dgtlmoon
7a1d2d924e Dark mode - system setting var is not required (it's cookie based) 2022-12-19 14:13:57 +01:00
dgtlmoon
c3731cf055 0.40.0.3 2022-12-19 12:41:52 +01:00
dgtlmoon
a287e5a86c Visual Selector - Select smallest/most precise element first, better filtering of zero size elements 2022-12-19 12:33:31 +01:00
dgtlmoon
235535c327 Fetching - Check the most overdue watch first (#1242) 2022-12-17 15:40:57 +01:00
dgtlmoon
44dc62da2d Overview list - Checkbox action "Recheck" 2022-12-16 18:35:09 +01:00
dgtlmoon
0c380c170f Playwright - Better error reporting and re-try fetch on fail once (#1238) 2022-12-16 18:06:14 +01:00
dgtlmoon
b7a2501d64 Fetching - Always sort the key order of JSON content for fewer false alerts (May cause an alert on upgrade, but will be better going forwards) #1219 2022-12-15 09:13:09 +01:00
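The idea in a few lines of Python: serialising with sorted keys means two responses that differ only in key order compare as equal.

```python
import json

a = '{"price": 10, "name": "Widget"}'
b = '{"name": "Widget", "price": 10}'

def normalise(text: str) -> str:
    # Re-serialise with sorted keys before diffing, so key order alone
    # never registers as a change (the behaviour the commit above describes).
    return json.dumps(json.loads(text), sort_keys=True, indent=4)

print(normalise(a) == normalise(b))   # True
```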
dgtlmoon
e970fef991 Fetcher + VisualSelector - xPath filter with attribute filter was breaking the element finder 2022-12-14 19:06:49 +01:00
dgtlmoon
b76148a0f4 Fetcher - CPU usage - Skip processing if the previous checksum and the just-fetched one were the same (#925) 2022-12-14 15:08:34 +01:00
dgtlmoon
93cc30437f Playwright+BrowserSteps - Fetch changes - Fetch simply after page starts rendering + delay seconds, disable service workers 2022-12-14 12:16:04 +01:00
dgtlmoon
6562d6e0d4 Improve ARM/rust build comment 2022-12-13 12:28:20 +01:00
dgtlmoon
6c217cc3b6 README.md - Improving JSONPath example for LD+JSON product data 2022-12-11 11:14:52 +01:00
dgtlmoon
f30cdf0674 0.40.0.2 2022-12-08 22:36:59 +01:00
dgtlmoon
14da0646a7 Price follower - Don't scan for ldjson data when 'no' was clicked on the suggestion (#1207) 2022-12-08 22:35:37 +01:00
dgtlmoon
b413cdecc7 Adding missing parts for pip build Re #1206 2022-12-08 21:54:55 +01:00
dgtlmoon
7bf52d9275 0.40.0 2022-12-08 20:09:42 +01:00
dgtlmoon
09e6624afd VisualSelector - Exclude items that are not interactable or visible 2022-12-08 20:08:41 +01:00
dgtlmoon
b58fd995b5 Automatically offer to track LD+JSON product price data (#1204) 2022-12-08 19:28:20 +01:00
dgtlmoon
f7bb8a0afa UI - favicon callback no longer needed 2022-12-07 12:14:36 +01:00
dgtlmoon
3e333496c1 Test cleanups (#1196) 2022-12-07 12:03:28 +01:00
Amro Hendawi
ee776a9627 Update runtime.txt (#1198) 2022-12-07 00:17:58 +01:00
dgtlmoon
65db4d68e3 Dark mode - HTML template tidy up (#1197) 2022-12-06 23:50:49 +01:00
dgtlmoon
74d93d10c3 UI - watch tags also known as watch tag / label 2022-12-06 23:16:22 +01:00
dgtlmoon
37aef0530a Notification templates - bug in update, was incorrectly updating the main system notification_title instead of the watch's 2022-12-06 18:29:09 +01:00
dgtlmoon
f86763dc7a Extract data - minor improvement to example 2022-12-06 10:53:23 +01:00
dgtlmoon
13c25f9b92 Darkmode - Pause/Mute notification colour fix, re #1195 2022-12-06 10:49:24 +01:00
dgtlmoon
265f622e75 Notification - Support for standard API calls post:// posts:// get:// gets:// delete:// deletes:// put:// puts:// (#1194) 2022-12-05 20:49:08 +01:00
dgtlmoon
c12db2b725 Notifications - tokens/jinja2 templating (#1184) 2022-12-05 19:58:43 +01:00
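A toy illustration of Jinja2-style tokens in a notification body; the token names used here mirror the app's style but should be checked against the real token table:

```python
from jinja2 import Template

# Hypothetical notification body: {{ watch_url }} and {{ diff }} are written in the
# style of changedetection.io's notification tokens, used here purely as an example.
body = Template("Change detected on {{ watch_url }}\n\n{{ diff }}")
print(body.render(watch_url="https://example.com/page",
                  diff="(changed) price: 10 -> 12"))
```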
dgtlmoon
a048e4a02d Dark mode - more colour fixes 2022-12-05 19:10:36 +01:00
dgtlmoon
69662ff91c Test improvement - improving notification error network test 2022-12-05 17:45:30 +01:00
dgtlmoon
fc94c57d7f Extract text as CSV - Extra validation (#1192) 2022-12-05 16:36:00 +01:00
dgtlmoon
7b94ba6f23 Dark mode - make watch list easier to read when there are 'unviewed' entries 2022-12-05 15:13:47 +01:00
dgtlmoon
2345b6b558 New feature - Simple extract data by regex from all historical watch text into CSV (#1191) 2022-12-05 14:48:03 +01:00
dgtlmoon
b8d5a12ad0 UI - Cursor over labels should be pointer 2022-12-05 10:42:48 +01:00
dgtlmoon
9e67a572c5 Dark mode - Make watches with errors easier to read 2022-12-05 09:53:53 +01:00
dgtlmoon
378d7b7362 Dark mode - cookie path should be all site 2022-12-04 20:54:15 +01:00
dgtlmoon
d1d4045c49 Tweaks - adding hover/title to dark mode button 2022-12-04 18:53:56 +01:00
dgtlmoon
77409eeb3a UI - Dark Mode (#1187) 2022-12-04 16:39:25 +01:00
peppetemp
87726e0bb2 docker-compose - Add playwright/selenium container dependencies example (#1178) 2022-12-02 16:13:59 +01:00
dgtlmoon
72222158e9 BrowserSteps - Can be shared by the watch share link 2022-12-02 09:36:13 +01:00
dgtlmoon
1814924c19 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-12-01 23:48:04 +01:00
dgtlmoon
8aae4197d7 UI - Make tabs hoverable 2022-12-01 23:47:51 +01:00
dgtlmoon
3a8a41a3ff Favicon multiplatform and path fix/update (#1176) 2022-12-01 23:29:53 +01:00
dgtlmoon
64caeea491 BrowserSteps - Cleanup interface on shutdown 2022-12-01 23:28:20 +01:00
dgtlmoon
3838bff397 BrowserSteps - More work on cleaner shutdowns of browser session 2022-12-01 23:08:28 +01:00
dgtlmoon
55ea983bda BrowserSteps - Forcefully shutdown playwright to prevent any race-conditions waiting for it to shutdown 2022-12-01 19:32:05 +01:00
dgtlmoon
b4d79839bf BrowserSteps - Make the UI require an extra step so it doesn't slow down the experience when clicking through the tabs (#1175) 2022-11-30 19:40:15 +01:00
dgtlmoon
0b8c3add34 BrowserSteps - Use correct mimetype for screenshot update 2022-11-29 14:07:53 +01:00
dgtlmoon
51d57f0963 BrowserSteps - Faster screenshot updates and enable gzip compression for all content replies in the UI (#1171) 2022-11-29 13:55:53 +01:00
dgtlmoon
6d932149e3 BrowserSteps - Add 'Execute JS' step 2022-11-29 09:09:26 +01:00
dgtlmoon
2c764e8f84 BrowserSteps - Also try to find clickable div/spans 2022-11-29 08:46:11 +01:00
dgtlmoon
07765b0d38 Update README.md 2022-11-28 20:55:18 +01:00
dgtlmoon
7c3faa8e38 Update README.md 2022-11-28 19:24:10 +01:00
dgtlmoon
4624974b91 BrowserSteps - Element finder filter (offpage) should also calculate top scroll offset 2022-11-28 18:04:02 +01:00
dgtlmoon
991841f1f9 Visual Selector and BrowserSteps - More accurate element detection when the page auto-scrolls on load Re #1169 2022-11-28 17:31:50 +01:00
dgtlmoon
e3db324698 Extra validation for URLs with template markup (#1166) 2022-11-27 16:18:11 +01:00
dgtlmoon
0988bef2cd Browser Steps - adding 'please wait' text while loading 2022-11-27 11:41:41 +01:00
dgtlmoon
5b281f2c34 Re #1163 psutil missing from pip requirements 2022-11-26 00:32:57 +01:00
dgtlmoon
a224f64cd6 Update README.md 2022-11-25 11:16:02 +01:00
dgtlmoon
7ee97ae37f Update README.md 2022-11-25 11:14:29 +01:00
dgtlmoon
69756f20f2 VisualSelector & BrowserSteps - Scraper improvements, remove duplicate code 2022-11-25 10:45:38 +01:00
dgtlmoon
326b7aacbb Bumping VisualSelector example animation 2022-11-25 10:02:18 +01:00
dgtlmoon
fde7b3fd97 Remove dupe xpath finder prep code 2022-11-25 09:25:05 +01:00
dgtlmoon
9d04cb014a Browsersteps 'Beta' label image path fix 2022-11-25 09:14:19 +01:00
dgtlmoon
5b530ff61c Configurable "Browser Steps" when Playwright/Chrome is configured (enter text, scroll, wait for text, click button etc) (#478) 2022-11-24 20:53:01 +01:00
Maeglin
c98536ace4 Update README.md - Make docker instructions easier to follow on Windows (#1158) 2022-11-23 14:42:36 +01:00
dgtlmoon
463747d3b7 0.39.22.1 2022-11-22 18:09:25 +01:00
dgtlmoon
791bdb42aa Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-11-22 18:09:03 +01:00
dgtlmoon
ce6c2737a8 Notification screenshot/JPEG was not being regenerated correctly (#1149) 2022-11-22 18:08:46 +01:00
dgtlmoon
ade9e1138b Re #1148 - Notification screenshot/JPEG was not being regenerated correctly 2022-11-22 17:41:06 +01:00
dgtlmoon
68d5178367 Update README.md 2022-11-21 00:24:15 +01:00
dgtlmoon
41dc57aee3 Update README.md 2022-11-21 00:20:55 +01:00
dgtlmoon
943704cd04 0.39.22 2022-11-20 16:29:16 +01:00
dgtlmoon
883561f979 Fix dangling HTML tag from screenshot notification 2022-11-20 16:04:26 +01:00
dgtlmoon
35d44c8277 Notification screenshot option should only be available to webdriver/playwright watches, screenshot sent as JPEG to save bandwidth, simplify the logic around screenshots (#1140) 2022-11-20 14:40:41 +01:00
dgtlmoon
d07d7a1b18 Minor test improvements 2022-11-20 11:35:35 +01:00
Matthias Bilger
f066a1c38f Option to attach screenshot to notification (#1127) 2022-11-20 09:37:48 +01:00
dgtlmoon
d0d191a7d1 VisualFilter - check previously set filters were set before highlighting 2022-11-19 17:37:51 +01:00
dgtlmoon
d7482c8d6a Add diff view option for JSON compare (comparing the fields defined on each. The order of fields, etc does not matter in this comparison.) 2022-11-19 15:17:09 +01:00
dgtlmoon
bcf7417f63 Update visual text difference library, add option to ignore whitespace when viewing diff (#1137) 2022-11-19 15:08:27 +01:00
dgtlmoon
df6e835035 Make VisualSelector show first available multiple selector, refactor to make more maintainable (#1132) 2022-11-17 11:52:48 +01:00
dgtlmoon
ab28f20eba Make link to notification debug log easier to find (#1130) 2022-11-16 09:17:57 +01:00
Hmmbob
1174b95ab4 Bump notification library (#1128) 2022-11-15 22:54:12 +01:00
dgtlmoon
a564475325 Re #1126 HIDE_REFERER setting had wrong default 2022-11-14 10:28:05 +01:00
dgtlmoon
85d8d57997 Test: Re-test under HIDE_REFERER condition, use strtobool so you can use 'False' (#1121) 2022-11-12 13:57:41 +01:00
dgtlmoon
359dcb63e3 Stability fix related to the new watch check count (#1113) 2022-11-10 20:01:07 +01:00
dgtlmoon
b043d477dc Use deepcopy to stop possible data corruption (#1108) 2022-11-08 12:18:38 +01:00
dgtlmoon
06bcfb28e5 Code- Use dict .get instead of key 2022-11-07 20:43:20 +01:00
dgtlmoon
ca3b351bae Adding a check counter to watch fetching (#1099) 2022-11-06 09:48:07 +01:00
dgtlmoon
b7e0f0a5e4 Update README.md 2022-11-05 12:22:52 +01:00
dgtlmoon
61f0ac2937 HIDE_REFERER incompatible with password based login, added comment to code #996 2022-11-04 23:46:03 +01:00
dgtlmoon
fca66eb558 Update README.md 2022-11-03 14:29:38 +01:00
dgtlmoon
359fc48fb4 Filters can now accept a list/multiple filters (#1064) #623 2022-11-03 12:13:54 +01:00
dgtlmoon
d0efeb9770 0.39.21.1 2022-11-02 23:48:10 +01:00
dgtlmoon
3416532cd6 Playwright extension added back to Dockerfile to resolve the conditional-install fix for Alpine (musl) based systems (#1087) 2022-11-02 23:47:44 +01:00
dgtlmoon
defc7a340e 0.39.21 2022-11-02 15:12:33 +01:00
dgtlmoon
c197c062e1 Disable version check when pytest is running (#1084) 2022-11-01 18:26:29 +01:00
dgtlmoon
77b59809ca Removing unused code (#1070) 2022-10-28 18:36:07 +02:00
dgtlmoon
f90b170e68 Docker & python - Jq conditional pip requirements.txt include (Don't install in Windows because there's no Windows library/wheel) 2022-10-27 23:26:14 +02:00
dgtlmoon
c93ca1841c Docker & python - Use pip conditional requirements to not install playwright for ARM (unsupported on ARM) (#1067) 2022-10-27 23:17:05 +02:00
Sandro
57f604dff1 UI - Make fetch error more readable (#1038) 2022-10-27 16:40:24 +02:00
dgtlmoon
8499468749 Update README.md 2022-10-27 15:17:14 +02:00
dgtlmoon
7f6a13ea6c Re #1052 - Watch 'open' link should use any dynamic/template info (#1063) 2022-10-27 13:29:24 +02:00
dgtlmoon
9874f0cbc7 Remove accidental files 2022-10-27 12:43:02 +02:00
dgtlmoon
72834a42fd Backups and Snapshots - Data directory now fully portable (all paths are relative), refactored backup zip export creation 2022-10-27 12:35:26 +02:00
dgtlmoon
724cb17224 Re #1052 - Dynamic URLs, use variables in the URL (such as the current date, the date in a month, and other logic - see https://github.com/dgtlmoon/changedetection.io/wiki/Handling-variables-in-the-watched-URL ) (#1057) 2022-10-24 23:20:39 +02:00
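A loose sketch of the idea: the watched URL itself is treated as a template, so date logic can be embedded. The real syntax is documented in the linked wiki page; the variable names below are invented for this example.

```python
from datetime import date, timedelta
from jinja2 import Template

# Invented variables purely for illustration; the template syntax/variables actually
# supported in watch URLs are documented in the wiki page linked above.
url = Template("https://example.com/report?from={{ today }}&to={{ next_month }}").render(
    today=date.today().isoformat(),
    next_month=(date.today() + timedelta(days=30)).isoformat(),
)
print(url)
```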
dgtlmoon
4eb4b401a1 API - system info - allow 5 minutes grace before watch is considered 'overdue' 2022-10-23 23:12:28 +02:00
dgtlmoon
5d40e16c73 API - Adding basic system info/system state API (#1051) 2022-10-23 19:15:11 +02:00
dgtlmoon
492bbce6b6 Build - Fix syntax in container build test (#1050) 2022-10-23 16:02:13 +02:00
dgtlmoon
0394a56be5 Building - Test container build on PR 2022-10-23 15:54:19 +02:00
Entepotenz
7839551d6b Testing - Use same version of playwright while running tests as in production builds (#1047) 2022-10-23 11:26:32 +02:00
Entepotenz
9c5588c791 update path for validation in the CONTRIBUTING.md (#1046) 2022-10-23 11:25:29 +02:00
dgtlmoon
5a43a350de History index safety check - Be sure that only valid history index lines are read (#1042) 2022-10-19 22:41:13 +02:00
Michael McMillan
3c31f023ce Option to Hide the Referer header from monitored websites. (#996) 2022-10-18 09:16:22 +02:00
dgtlmoon
4cbcc59461 0.39.20.4 2022-10-17 18:36:47 +02:00
dgtlmoon
4be0260381 Better cross platform file handling in diff and preview (#1034) 2022-10-17 18:36:22 +02:00
dgtlmoon
957a3c1c16 0.39.20.3 2022-10-17 17:43:35 +02:00
dgtlmoon
85897e0bf9 Windows - diff file handling improvements (#1031) 2022-10-17 17:40:28 +02:00
dgtlmoon
63095f70ea Also include tests in pip build 2022-10-17 17:13:15 +02:00
dgtlmoon
8d5b0b5576 Update README.md 2022-10-12 10:51:39 +02:00
dgtlmoon
1b077abd93 0.39.20.2 2022-10-12 09:53:59 +02:00
dgtlmoon
32ea1a8721 Windows - JQ - Make library optional so it doesn't break Windows pip installs (#1009) 2022-10-12 09:53:16 +02:00
dgtlmoon
fff32cef0d Adding test - Test the 'execute JS before changedetection' (#1006) 2022-10-11 14:40:36 +02:00
dgtlmoon
8fb146f3e4 0.39.20.1 2022-10-09 23:05:35 +02:00
dgtlmoon
770b0faa45 Code - check containers build when Dockerfile or requirements.txt changes (#1005) 2022-10-09 22:58:01 +02:00
dgtlmoon
f6faa90340 Adding make to Dockerfile build as required by jq for ARM devices 2022-10-09 22:29:18 +02:00
dgtlmoon
669fd3ae0b Don't use default Requests user-agent and accept headers in playwright+selenium requests, breaks sites such as united.com. (#1004) 2022-10-09 18:25:36 +02:00
dgtlmoon
17d37fb626 0.39.20 2022-10-09 16:13:32 +02:00
Yusef Ouda
dfa7fc3a81 Adds support for jq JSON path querying engine (#1001) 2022-10-09 16:12:45 +02:00
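For reference, the kind of query this enables, sketched with the pip jq package; the data and filter below are made up:

```python
import jq  # the optional "jq" pip package that this change builds on

# Made-up JSON payload and filter, just to show the querying style jq enables.
data = {"product": {"name": "Widget", "offers": {"price": "19.99"}}}
price = jq.compile(".product.offers.price").input(data).first()
print(price)   # 19.99
```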
dgtlmoon
cd467df97a Adding link to BrightData Proxy info (#1003) 2022-10-09 15:51:57 +02:00
dgtlmoon
71bc2fed82 Remove quotationspage default watch 2022-10-09 14:06:07 +02:00
Hmmbob
738fcfe01c Notification library: Bump apprise to 1.1.0 (signal, opsgenie, pagerduty, bark and mailto fixes, adds support for BulkSMS and SMSEagle) (#1002) 2022-10-09 11:42:51 +02:00
dgtlmoon
3ebb2ab9ba Selenium fetcher - screenshot should be taken after 'wait' time, not before #873 2022-09-25 11:05:07 +02:00
dgtlmoon
ac98bc9144 Upgrade Playwright to 1.26 2022-09-24 23:51:26 +02:00
dgtlmoon
3705ce6681 Render Extract Configurable Delay Seconds should also apply after executing any JS #958 2022-09-24 23:48:03 +02:00
dgtlmoon
f7ea99412f Re #958 - remove change screensize, should be in 1280x720 default, was causing "Unable to retrieve content because the page is navigating and changing the content." on some sites 2022-09-19 14:02:32 +02:00
dgtlmoon
d4715e2bc8 Tidy up proxies.json logic, adding tests (#955) 2022-09-19 13:14:35 +02:00
dgtlmoon
8567a83c47 Update README.md - Include BrightData suggestion 2022-09-16 13:21:01 +02:00
dgtlmoon
77fdf59ae3 Improve Proxy minimum time debug output 2022-09-15 17:17:07 +02:00
dgtlmoon
0e194aa4b4 Default proxy settings fixes 2022-09-15 16:58:23 +02:00
dgtlmoon
2ba55bb477 Use proxies.json instead of proxies.txt - see wiki Proxies section (#945) 2022-09-15 15:25:23 +02:00
dgtlmoon
4c759490da Upgrade Playwright to 1.25 2022-09-15 15:10:40 +02:00
dgtlmoon
58a52c1f60 Update README.md 2022-09-13 15:29:05 +02:00
dgtlmoon
22638399c1 0.39.19.1 2022-09-11 09:23:43 +02:00
dgtlmoon
e3381776f2 Notification - code tidyup 2022-09-11 09:08:13 +02:00
dgtlmoon
26e2f21a80 Watch list & notification - Adding extra list batch operations for Mute, Unmute, Reset-to-default 2022-09-10 15:29:39 +02:00
dgtlmoon
b6009ae9ff Notification - Reset defaults button should be on edit page only 2022-09-10 15:19:18 +02:00
dgtlmoon
b046d6ef32 Notification watch settings - add button to make watch use defaults (empties the settings) 2022-09-10 15:11:31 +02:00
dgtlmoon
e154a3cb7a Notification system update - set watch to use defaults if it is the same as the default 2022-09-10 15:01:11 +02:00
Jason Nader
1262700263 Fix typo (#924) 2022-09-09 12:08:01 +02:00
dgtlmoon
434c5813b9 0.39.19 2022-09-08 20:16:35 +02:00
dgtlmoon
0a3dc7d77b Update README.md 2022-09-08 20:15:23 +02:00
dgtlmoon
a7e296de65 Tweaks to python PIP readme 2022-09-08 17:53:58 +02:00
dgtlmoon
bd0fbaaf27 Use play and pause separate icons (#919) 2022-09-08 17:50:45 +02:00
dgtlmoon
0c111bd9ae Further notification settings refinement (#910) 2022-09-08 09:10:04 +02:00
dgtlmoon
ed9ac0b7fb Reliability improvement - Check watch UUID exists when reporting missing path (#915) 2022-09-07 23:04:35 +02:00
dgtlmoon
743a3069bb repair pip readme 2022-09-04 15:23:32 +02:00
dgtlmoon
fefc39427b Test improvement - Visual selector data loads as JSON (#895) 2022-08-31 16:32:50 +02:00
dgtlmoon
2c6faa7c4e Cleaner separation of watch/global notification settings (#894) 2022-08-31 15:49:13 +02:00
dgtlmoon
6168cd2899 Code maintenance - Removing old function (#875) 2022-08-31 15:23:10 +02:00
dgtlmoon
f3c7c969d8 Show screenshot age in [preview] 2022-08-25 11:18:00 +02:00
dgtlmoon
1355c2a245 Update README.md 2022-08-25 11:00:20 +02:00
dgtlmoon
96cf1a06df Update README.md 2022-08-24 23:26:55 +02:00
dgtlmoon
019a4a0375 Update README.md 2022-08-24 09:52:11 +02:00
dgtlmoon
db2f7b80ea Update bug_report.md 2022-08-20 15:30:53 +02:00
dgtlmoon
bfabd7b094 Update bug_report.md 2022-08-20 15:28:01 +02:00
dgtlmoon
d92dbfe765 Update README.md 2022-08-20 00:39:37 +02:00
dgtlmoon
67d2441334 0.39.18 2022-08-19 11:37:26 +02:00
dgtlmoon
3c30bc02d5 More data saving pre-checks (#863) 2022-08-18 23:25:23 +02:00
dgtlmoon
dcb54117d5 Update screenshot 2022-08-18 15:29:34 +02:00
dgtlmoon
b1e32275dc Checkbox operations - reorder buttons for safety 2022-08-18 15:10:05 +02:00
dgtlmoon
e2a6865932 UI feature - Basic checkbox/group operations (#861) 2022-08-18 14:48:21 +02:00
dgtlmoon
f04adb7202 Bug fix - automatically queued watch checks weren't always being processed sequentially 2022-08-18 13:41:28 +02:00
dgtlmoon
1193a7f22c Playwright - Support proxy auth mechanisms (#859) 2022-08-18 09:46:28 +02:00
dgtlmoon
0b976827bb Update README.md 2022-08-17 22:11:04 +02:00
dgtlmoon
280e916033 Update README.md 2022-08-17 22:00:46 +02:00
dgtlmoon
5494e61a05 Skip processing when watch was deleted 2022-08-17 13:29:32 +02:00
dgtlmoon
e461c0b819 Playwright fetcher didn't report low level HTTP errors correctly (like Connection Refused) (#852) 2022-08-17 13:25:08 +02:00
dgtlmoon
d67c654f37 Be sure visual-selector data is set when xPath/CSS filter is not yet found (#851) 2022-08-17 13:21:06 +02:00
dgtlmoon
06ab34b6af Visual selector data not being saved by refactor 2022-08-16 16:53:15 +02:00
dgtlmoon
ba8676c4ba 'Save chrome screenshot' checkbox was never used, removing it - we always save the screenshot. (#844) 2022-08-16 16:18:09 +02:00
dgtlmoon
4899c1a4f9 Crash fix: Data store sub-directories weren't always being created when needed (#842) 2022-08-16 15:17:36 +02:00
dgtlmoon
9bff1582f7 Make the table header easier to understand when sorting (#840) 2022-08-16 12:49:08 +02:00
dgtlmoon
269e3bb7c5 Column sorting (#838) 2022-08-16 10:45:36 +02:00
dgtlmoon
9976f3f969 Update README.md 2022-08-15 22:27:45 +02:00
dgtlmoon
1f250aa868 Revert "don't process paused entries after queue", so we can still manually recheck a paused watch 2022-08-15 22:19:17 +02:00
dgtlmoon
1c08d9f150 Remove 'last-changed' from url-watches.json and always calculate from history index (#835) 2022-08-15 21:14:18 +02:00
dgtlmoon
9942107016 Massive improvements to error handling - show separate output for non HTTP 200 status replies 2022-08-15 18:56:53 +02:00
dgtlmoon
1eb5726cbf Execute JS should happen after waiting seconds 2022-08-15 11:27:04 +02:00
dgtlmoon
b3271ff7bb Upgrade playwright python driver (#834) 2022-08-14 19:53:42 +02:00
dgtlmoon
f82d3b648a Crash protection - handle the case where watch was deleted while being checked (#833) 2022-08-14 19:13:45 +02:00
dgtlmoon
034b1330d4 Don't process a watch if it was paused after being queued (#825) 2022-08-09 10:48:18 +02:00
Hmmbob
a7d005109f Notification Library Update (fixes for Home Assistant) - update requirements.txt (#818) 2022-08-08 08:48:38 +02:00
dgtlmoon
048c355e04 Remove social links for now 2022-08-07 19:24:27 +02:00
dgtlmoon
4026575b0b 0.39.17.2 2022-08-05 15:53:09 +02:00
dgtlmoon
8c466b4826 Test fix - Remove debug from test 2022-08-05 08:26:37 +02:00
dgtlmoon
6f072b42e8 Security update - Password could be unset from settings form unexpectedly (#808) 2022-08-05 00:05:43 +02:00
dgtlmoon
e318253f31 Disable SIGCHLD Handler for now - keeping SIGTERM for DB writes 2022-08-04 23:48:36 +02:00
dgtlmoon
f0f2fe94ce Handle SIGTERM for cleaner shutdowns (#737) 2022-08-02 10:21:25 +02:00
dgtlmoon
26f5c56ba4 Remove [save & preview] button, the preview is not updated live so it can lead to confusion (#801) 2022-08-01 14:47:00 +02:00
dgtlmoon
a1c3107cd6 Feature - priority queue - edited and added watches should get checked before automatically queued watches (#799) 2022-07-31 15:35:35 +02:00
dgtlmoon
8fef3ff4ab [preview current] cleanup code and add test 2022-07-30 20:11:56 +02:00
dgtlmoon
baa25c9f9e Feature - mute notifications (#791) 2022-07-29 21:09:55 +02:00
dgtlmoon
488699b7d4 Test improvement - remove unnecessary step 2022-07-29 10:23:59 +02:00
dgtlmoon
cf3a1ee3e3 0.39.17.1 2022-07-29 10:13:29 +02:00
dgtlmoon
daae43e9f9 Bug fix: Filter failure detection notification was interfering with change-detection results, added test case (#786) 2022-07-29 10:11:49 +02:00
dgtlmoon
cdeedaa65c README.md - new Discord invite link 2022-07-28 23:07:07 +02:00
dgtlmoon
3c9d2ded38 0.39.17 2022-07-28 13:07:51 +02:00
dgtlmoon
9f4364a130 Add https://discord.com/api notification hook to the automatic truncation due to Discord's 2000 char limit 2022-07-28 12:34:55 +02:00
dgtlmoon
5bd9eaf99d UI Feature - Add watch in "paused" state, saving then unpauses (#779) 2022-07-28 12:13:26 +02:00
dgtlmoon
b1c51c0a65 Enhancement - support xPath text() function filter, for example "//title/text()" in RSS feeds (#778) 2022-07-28 11:50:31 +02:00
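The //title/text() example from the commit, shown standalone with lxml:

```python
from lxml import html

# Standalone illustration of an XPath text() filter such as "//title/text()".
doc = html.fromstring("<html><head><title>Example feed title</title></head></html>")
print(doc.xpath("//title/text()"))   # ['Example feed title']
```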
dgtlmoon
232bd92389 Bug fix - Filter "Only trigger when new lines appear" should check all history, not only the first item (#777) 2022-07-28 10:16:19 +02:00
dgtlmoon
e6173357a9 Visual Selector direct element finder fix 2022-07-28 09:19:10 +02:00
dgtlmoon
f2b8888aff Update README.md 2022-07-27 14:25:24 +02:00
dgtlmoon
9c46f175f9 Update README.md links 2022-07-27 14:23:18 +02:00
dgtlmoon
1f27865fdf Whether the filter failure notification is sent by default is now controlled by an Env var 2022-07-27 00:01:51 +02:00
dgtlmoon
faa42d75e0 Refactor of extract text filter - Regex, support Regex (groups) and all python regex flags via /something/aiLmsux (#773) 2022-07-26 17:34:34 +02:00
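A rough sketch of how a /pattern/flags expression could map onto Python's re flags; the parsing details here are illustrative, not the app's exact implementation:

```python
import re

# Map single-character flags (the aiLmsux set mentioned above) to Python re flags.
FLAG_MAP = {"a": re.A, "i": re.I, "L": re.L, "m": re.M,
            "s": re.S, "u": re.U, "x": re.X}

def compile_slash_regex(expr: str) -> re.Pattern:
    """Illustrative parser for '/pattern/flags' style extract-text expressions."""
    body, _, flag_chars = expr.rpartition("/")
    flags = 0
    for ch in flag_chars:
        flags |= FLAG_MAP.get(ch, 0)
    return re.compile(body.lstrip("/"), flags)

rx = compile_slash_regex("/out of stock/i")
print(bool(rx.search("Currently OUT OF STOCK")))   # True
```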
dgtlmoon
3b6e6d85bb Update README.md - adding LinkedIn link 2022-07-25 00:28:41 +02:00
dgtlmoon
30d6a272ce Update README.md - Adding Discord and YouTube links 2022-07-24 23:06:42 +02:00
dgtlmoon
291700554e Bug fix for alerting when xPath based filters are no longer present (#772) 2022-07-23 19:39:52 +02:00
dgtlmoon
a82fad7059 Send notification when CSS/xPath filter is missing after more than 6 (configurable) attempts (#771) 2022-07-23 17:19:00 +02:00
dgtlmoon
c2fe5ae0d1 mailto plaintext handling fix for 'plaintext' apprise integration 2022-07-23 16:55:31 +02:00
dgtlmoon
5beefdb7cc Minor code cleanups 2022-07-23 13:18:44 +02:00
dgtlmoon
872bbba71c Notifications - email - Correctly send plaintext notification email with plaintext header (#767) 2022-07-21 15:22:20 +02:00
Jonathon Sisson
d578de1a35 Form text tweak - Regex clarification (#766) 2022-07-21 10:05:59 +02:00
dgtlmoon
cdc104be10 Update README.md 2022-07-20 14:37:45 +02:00
dgtlmoon
dd0eeca056 Handle simple obfuscations - HomeDepot.com style price obfuscation (#764) 2022-07-20 14:02:22 +02:00
dgtlmoon
a95468be08 Fixing docker-compose.yml PLAYWRIGHT_DRIVER_URL example URL 2022-07-15 20:45:29 +02:00
Brandon Wees
ace44d0e00 Notifications fix - Discord - added discord webhook base url to truncation rules (#753)
Co-authored-by: bwees <branonwees@gmail.com>
2022-07-14 17:41:12 +02:00
dgtlmoon
ebb8b88621 Update Playwright URI Env example with stealth setting and CORS workaround (more reliable fetching) 2022-07-12 22:36:22 +02:00
dgtlmoon
12fc2200de remove extra file 2022-07-12 22:32:20 +02:00
dgtlmoon
52d3d375ba removing package-lock.json - not required to be in git 2022-07-10 20:29:11 +02:00
dgtlmoon
08117089e6 Share-icon cleanups 2022-07-10 20:24:49 +02:00
dgtlmoon
2ba3a6d53f Test improvement: Extract text should return all matches 2022-07-10 20:05:48 +02:00
dgtlmoon
2f636553a9 Bug fix: RSS Feed should also announce utf-8 charset 2022-07-10 18:50:21 +02:00
Freddie Leeman
0bde48b282 Regex extract filter: Return all regex results instead of first match (#730) 2022-07-10 15:09:10 +02:00
dgtlmoon
fae1164c0b Ability to specify JS before running change-detection (#744) 2022-07-10 13:56:01 +02:00
dgtlmoon
169c293143 Playwright - log console errors to output 2022-07-10 13:55:29 +02:00
dgtlmoon
46cb5cff66 UI Improvement - Clarifying "Visual Filter" tool as "Visual Selector Filter" 2022-07-10 12:51:12 +02:00
Simo Elalj
05584ea886 Use environment variables to override new watch settings defaults (user-agent, timeout, workers) (#742) 2022-07-08 20:50:04 +02:00
marvin8
176a591357 Update docker-compose.yml - Remove duplicate environment variables from playwright-chrome sample config in docker-compose.yml (#738) 2022-07-06 09:03:10 +02:00
dgtlmoon
15569f9592 0.39.16 2022-07-05 16:14:57 +02:00
dgtlmoon
5f9e475fe0 Fix notification apprise application name to changedetection.io #731 2022-06-30 23:11:03 +02:00
dgtlmoon
34b8784f50 Update README.md 2022-06-30 16:16:58 +02:00
dgtlmoon
2b054ced8c [new filter] Filter option - Trigger only when NEW content (lines) are detected ( compared to earlier text snapshots ) (#685) 2022-06-28 18:34:32 +02:00
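In essence, the filter only counts lines never seen in any earlier snapshot, roughly as in this sketch (see also the related fix further up ensuring all history is checked, not only the first item):

```python
# Sketch only: treat a change as significant when lines appear that were never
# present in ANY earlier snapshot, not just the immediately previous one.
previous_snapshots = [
    "price: 10\nin stock",
    "price: 12\nin stock",
]
current = "price: 12\nin stock\nships next week"

seen_before = {line for snap in previous_snapshots for line in snap.splitlines()}
new_lines = [line for line in current.splitlines() if line not in seen_before]
print(new_lines)   # ['ships next week']
```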
dgtlmoon
6553980cd5 Playwright - Use HTTP Request Headers override (Cookie, etc) 2022-06-25 23:42:48 +02:00
jtagcat
7c12c47204 lang: prefer 'clear (snap) history' to 'scrub' (#721) 2022-06-25 13:43:57 +02:00
dgtlmoon
dbd9b470d7 Minor diff page improvements - list should be sorted 'newest first' and no need to include the current version to compare against (#716) 2022-06-23 10:16:05 +02:00
dgtlmoon
83555a9991 bug fix: last_changed was being set on the first fetch, should only be set on the change after the first fetch #705 2022-06-23 09:41:55 +02:00
dgtlmoon
5bfdb28bd2 Update README.md 2022-06-16 11:02:22 +02:00
dgtlmoon
31a6a6717b Improve docker-compose.yml browserless docker container example, add env var for STEALTH and BLOCK_ADS (#701) 2022-06-15 23:50:48 +02:00
dgtlmoon
7da32f9ac3 New filter - Block change-detection if text matches - for example, block change-detection while the text "out of stock" is on the page, know when the text is no longer on the page (#698) 2022-06-15 22:59:37 +02:00
dgtlmoon
bb732d3d2e Docker containers - :latest is now stable release, :dev is now master/nightly 2022-06-15 22:59:21 +02:00
dgtlmoon
485e55f9ed Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-06-15 19:12:34 +02:00
dgtlmoon
601a20ea49 Trigger filters improvement - it's possible some changes weren't getting detected because the previous checksum only recorded when an event occurred (#697) 2022-06-15 19:11:20 +02:00
dgtlmoon
76996b9eb8 Some changes weren't getting triggered because the previous checksum only recorded when an event occurred 2022-06-15 17:18:46 +02:00
dgtlmoon
fba2b1a39d Notifications regression bug in 0.39.15 - only sent the first notification URL 2022-06-15 17:05:03 +02:00
dgtlmoon
4a91505af5 Playwright screenshots - no need for high-res "bug workaround" screenshot, use lower quality/faster configurable image quality env var 2022-06-15 10:52:24 +02:00
dgtlmoon
4841c79b4c Adding extra check when updating DB on ReplyWithContentButNoText 2022-06-14 19:54:35 +02:00
dgtlmoon
2ba00d2e1d Notifications log - log what was sent after applying all cleanups 2022-06-14 17:01:13 +02:00
dgtlmoon
19c96f4bdd Re #555 - tgram:// notifications - strip added HTML tag which is not supported by Telegram 2022-06-14 12:00:21 +02:00
dgtlmoon
82b900fbf4 Give more helpful error message when a page doesn't load 2022-06-14 08:16:22 +02:00
dgtlmoon
358a365303 Tweaks to playwright fetch code - better timeout handling 2022-06-13 23:39:43 +02:00
dgtlmoon
a07ca4b136 Re #580 - New functionality - Random "jitter" delay to requests (#681) 2022-06-13 12:41:53 +02:00
dgtlmoon
ba8cf2c8cf 0.39.15 2022-06-12 14:05:34 +02:00
dgtlmoon
3106b6688e Watch overview list - adding spinner to make it easier to see what's currently being 'Checked' 2022-06-12 12:52:17 +02:00
dgtlmoon
2c83845dac Preview section - add helpful check 2022-06-12 11:10:06 +02:00
dgtlmoon
111266d6fa Send test notification - improved handling of errors 2022-06-12 10:47:00 +02:00
dgtlmoon
ead610151f Notification log - also log normal requests and make the log easier to find 2022-06-11 23:07:09 +02:00
dgtlmoon
7e1e763989 Update bug_report.md 2022-06-11 00:43:28 +02:00
dgtlmoon
327cc4af34 Use correct RSS CDATA handling (#662) 2022-06-08 18:40:01 +02:00
dgtlmoon
6008ff516e Improve logging (#671) 2022-06-08 18:32:41 +02:00
dgtlmoon
cdcf4b353f New [scrub] button when editing a watch - scrub single watch history (#672) 2022-06-08 18:32:25 +02:00
dgtlmoon
1ab70f8e86 Diff + Preview - Hide date selector widget when viewing screenshots as it's not yet possible to compare screenshots (but will be soon!) 2022-06-07 19:53:13 +02:00
dgtlmoon
8227c012a7 Diff + Preview - Fixing screenshot behaviour after preference change 2022-06-07 19:51:17 +02:00
dgtlmoon
c113d5fb24 Screenshot handling on the diff/preview section refactor (#630) 2022-06-07 19:22:42 +02:00
dgtlmoon
8c8d4066d7 Shared watches - include "Extract text" filter 2022-06-07 17:06:05 +02:00
dgtlmoon
277dc9e1c1 Improve error message when filter not found in page result (#666) 2022-06-07 16:43:57 +02:00
dgtlmoon
fc0fd1ce9d "Extract text" filter - improve placeholder example 2022-06-06 18:26:47 +02:00
dgtlmoon
bd6127728a Visual selector - 'clear selection' button should clear the filter also 2022-06-06 17:07:29 +02:00
dgtlmoon
4101ae00c6 New feature - "Extract text" filter ability (#624) 2022-06-06 16:57:50 +02:00
dgtlmoon
62f14df3cb Fixing RSS feed HTML content formatting (#662) 2022-06-06 10:24:39 +02:00
Fuzzy
560d465c59 Update notification library - Improving telegram support 2022-06-06 10:07:50 +02:00
dgtlmoon
7929aeddfc 'Mark all viewed' button was missing in this version, added test also. (#652) 2022-06-02 10:01:03 +02:00
dgtlmoon
8294519f43 Content fetcher - Handle when a page doesn't load properly 2022-06-01 13:12:37 +02:00
dgtlmoon
8ba8a220b6 Playwright - Correctly close browser context/sessions on exceptions 2022-06-01 12:59:44 +02:00
dgtlmoon
aa3c8a9370 Move history data to a textfile, improves memory handling (#638) 2022-05-31 23:43:50 +02:00
dgtlmoon
dbb5468cdc Update feature_request.md 2022-05-31 22:07:22 +02:00
dgtlmoon
329c7620fb Remove UK Covid news 2022-05-31 22:04:35 +02:00
Amos (LFlare) Ng
1f974bfbb0 Visual Selector fix: Firefox compatibility - Visual Selector (#646) 2022-05-31 09:04:01 +02:00
Tim Loderhose
437c8525af Remove group tag arbitrary length limit (#645) 2022-05-30 18:28:53 +02:00
dgtlmoon
a2a1d5ae90 Distill.io import bug fix when no tags assigned to a watch (#557) 2022-05-29 22:04:23 +02:00
dgtlmoon
2566de2aae Ignore whitespace on by default 2022-05-28 13:30:57 +02:00
dgtlmoon
dfec8dbb39 Visual Selector - clear events when changing tabs 2022-05-25 15:47:30 +02:00
dgtlmoon
5cefb16e52 Minor code cleanup 2022-05-25 15:38:40 +02:00
dgtlmoon
341ae24b73 Re #616 - content trigger - adding extra test (#620) 2022-05-25 15:31:59 +02:00
dgtlmoon
f47c2fb7f6 README.md update Visual Selector tool - tidy up screenshots, improve text 2022-05-25 11:44:59 +02:00
dgtlmoon
9d742446ab Playwright - ByPass CSP for more reliable JS scraping, disable accept downloads 2022-05-25 11:05:18 +02:00
dgtlmoon
e3e022b0f4 VisualSelector - Better handling of filter targets that are no longer available in the HTML 2022-05-25 10:23:43 +02:00
dgtlmoon
6de4027c27 Update bug_report.md 2022-05-24 14:13:11 +02:00
dgtlmoon
cda3837355 pip build fix - include API module 2022-05-24 00:16:50 +02:00
dgtlmoon
7983675325 Visual Selector - be more resilient when sites interfere with the xPath scraping 2022-05-24 00:10:38 +02:00
dgtlmoon
eef56e52c6 Adding new Visual Selector for choosing the area of the webpage to monitor - playwright/browserless only (#566) 2022-05-23 23:44:51 +02:00
dgtlmoon
8e3195f394 0.39.14 2022-05-23 14:40:26 +02:00
dgtlmoon
e17c2121f7 Fix encoding errors with XPath filters from UTF-8 responses (#619) 2022-05-20 18:07:08 +02:00
dgtlmoon
07e279b38d API Interface (#617) 2022-05-20 16:27:51 +02:00
dgtlmoon
2c834cfe37 Add note that changedetection is not performed on the screenshot just yet (WIP https://github.com/dgtlmoon/changedetection.io/pull/419 ) 2022-05-20 12:52:41 +02:00
dgtlmoon
dbb5c666f0 Fixing edit template HTML 2022-05-18 14:09:39 +02:00
dgtlmoon
70b3493866 Proxy settings on watch should have a "[ ] default" option (#610) 2022-05-18 13:59:54 +02:00
dgtlmoon
3b11c474d1 Input field tidyup (#611) 2022-05-18 13:59:17 +02:00
dgtlmoon
890e1e6dcd Update wiki link for 'More info' about sharing a watch and its configuration 2022-05-17 22:44:36 +02:00
dgtlmoon
6734fb91a2 Option to control if pages with no renderable content are a change (example: JS webapps that don't render any text sometimes) (#608) 2022-05-17 22:22:00 +02:00
dgtlmoon
16809b48f8 Playwright - raise EmptyReply on empty reply, no need to process further 2022-05-17 18:40:15 +02:00
dgtlmoon
67c833d2bc Re #214 - configurable wait extra seconds for webdriver requests before extracting text (#606) 2022-05-17 18:35:33 +02:00
weeix
31fea55ee4 Fix PLAYWRIGHT_DRIVER_URL default value (cf. #587) (#599) 2022-05-14 22:34:44 +02:00
dgtlmoon
b6c50d3b1a Update PIP readme.md 2022-05-10 22:46:59 +02:00
dgtlmoon
034507f14f Fixing Pip install problem - Update MANIFEST to include model/ subdir, improving imports (#593) 2022-05-10 22:45:08 +02:00
dgtlmoon
0e385b1c22 0.39.13 2022-05-10 17:24:38 +02:00
dgtlmoon
f28c260576 Distill.io JSON export file importer (#592) 2022-05-10 17:15:41 +02:00
dgtlmoon
18f0b63b7d Ability to specify a list of proxies to choose from, always using the first one by default, See wiki (#591) 2022-05-08 20:35:36 +02:00
Thilo-Alexander Ginkel
97045e7a7b Improving Playwright docs (#588) 2022-05-07 22:23:17 +02:00
dgtlmoon
9807cf0cda Playwright - code fix 2022-05-07 17:29:59 +02:00
dgtlmoon
d4b5237103 Playwright fetcher - more reliable by just waiting arbitrary seconds after the last network IO 2022-05-07 17:14:40 +02:00
dgtlmoon
dc6f76ba64 Make proxy configuration more consistent - see https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration (#585) 2022-05-07 16:37:56 +02:00
dgtlmoon
1f2f93184e Playwright fetcher - use the correct default User-Agent 2022-05-06 23:59:38 +02:00
dgtlmoon
0f08c8dda3 Toggle visibility of extra requests options/settings when not in use (#584) 2022-05-06 23:40:32 +02:00
dgtlmoon
68db20168e Add new fetch method: Playwright Chromium (Selenium/WebDriver alternative) (#489) 2022-05-02 21:40:40 +02:00
dgtlmoon
1d4474f5a3 Simplify scrub operation (simply cleans all) (#575) 2022-05-02 21:10:23 +02:00
dgtlmoon
613308881c Bugfix - dont update record when deleted during check 2022-05-01 21:41:29 +02:00
dgtlmoon
f69585b276 Improving support info in README.md 2022-04-29 20:26:02 +02:00
dgtlmoon
0179940df1 Handle deletions better (#570) 2022-04-29 19:12:33 +02:00
dgtlmoon
c0d0424e7e Data storage bug fix #569 2022-04-29 18:26:15 +02:00
dgtlmoon
014dc61222 Upgrade notifications library - fixing markup in email subject 2022-04-29 09:39:40 +02:00
dgtlmoon
06517bfd22 Ability to 'Share' a watch by a generated link, this will include all filters and triggers - see Wiki (#563) 2022-04-26 10:52:08 +02:00
dgtlmoon
b3a115dd4a Upgrade notifications library Re #555 - fixing telegram HTML markup in notification title 2022-04-25 23:12:32 +02:00
dgtlmoon
ffc4215411 Unify MINIMUM_SECONDS_RECHECK_TIME env var to 60 seconds 2022-04-24 20:37:30 +02:00
dgtlmoon
9e708810d1 Seconds/minutes/hours/days between checks form field upgrade from 'minutes' only (#512) 2022-04-24 16:56:32 +02:00
dgtlmoon
1e8aa6158b Form styling improvements 2022-04-24 14:40:53 +02:00
dgtlmoon
015353eccc Form field handling improvements - fixing field list handler for empty lines 2022-04-24 13:53:13 +02:00
dgtlmoon
501183e66b Fix "Add email" button in main global notification settings 2022-04-22 10:51:52 +02:00
dgtlmoon
def74f27e6 Test notification button fixed in main settings (#556) 2022-04-21 21:36:10 +02:00
dgtlmoon
37775a46c6 tgram:// be sure total notification size is always under their 4096 size limit 2022-04-21 16:28:15 +02:00
dgtlmoon
e4eaa0c817 Shows which items are already in the queue, disables adding to the queue if already in the recheck queue (#552) 2022-04-21 12:52:45 +02:00
dgtlmoon
206ded4201 Notifications - Signal API support, Ntfy support, hotmail, matrix, Gotify API fixes 2022-04-20 23:13:55 +02:00
dgtlmoon
9e71f2aa35 Discord:// notification size limit - also includes the notification title 2022-04-20 17:00:21 +02:00
dgtlmoon
f9594aeffb Fix spelling errors 2022-04-20 09:51:53 +02:00
dgtlmoon
b4e1353376 Update README.md 2022-04-19 23:56:11 +02:00
dgtlmoon
5b670c38d3 Update README.md 2022-04-19 23:48:21 +02:00
dgtlmoon
2a9fb12451 Import speed improvements, and adding an import URL batch size of 5,000 to stop accidental CPU overload (#549) 2022-04-19 23:15:32 +02:00
dgtlmoon
6c3c5dc28a Ability to set the default fetch mode via the DEFAULT_FETCH_BACKEND variable 2022-04-19 23:15:00 +02:00
dgtlmoon
8f062bfec9 Refactor form handling (#548) 2022-04-19 21:43:07 +02:00
dgtlmoon
380c512cc2 Adding support for change detection of HTML source-code via "source:https://website.com" prefix (#540) 2022-04-12 17:36:29 +02:00
dgtlmoon
d7ed7c44ed Re-label the quick-add widget placeholder 'tag' to 'watch group' 2022-04-12 10:55:43 +02:00
dgtlmoon
34a87c0f41 HTTP Fetcher code improvements 2022-04-12 08:36:08 +02:00
dgtlmoon
4074fe53f1 Adding RSS metadata auto-discovery 2022-04-12 07:35:47 +02:00
Tristan Hill
44d599d0d1 Upgrade WTforms form handler to v3 (#523) 2022-04-09 19:50:56 +02:00
dgtlmoon
615fe9290a 0.39.12 2022-04-09 14:16:30 +02:00
dgtlmoon
2cc6955bc3 Miscellaneous settings form visual improvements (#535) 2022-04-09 12:15:34 +02:00
dgtlmoon
9809af142d Option to render links as [Some Text ](/link), adds the ability to change-detect on hyperlink changes 2022-04-09 10:35:14 +02:00
dgtlmoon
1890881977 Specify our Discord avatar_url as default avatar_url 2022-04-08 18:35:59 +02:00
dgtlmoon
9fc2fe85d5 Minor git updates 2022-04-08 17:21:42 +02:00
dgtlmoon
bb3c546838 Fix screenshot tab name 2022-04-08 17:08:06 +02:00
dgtlmoon
165f794595 Discord:// notifications should be cut to 2000 chars or Discord will not process them. (#531 + #323) 2022-04-08 16:32:04 +02:00
dgtlmoon
a440eece9e Make long reports in the notification error log easier to read 2022-04-08 14:12:42 +02:00
dgtlmoon
34c83f0e7c [Add email] button in notification settings with a prefix set from NOTIFICATION_MAIL_BUTTON_PREFIX env variable when defined. (#528) 2022-04-07 18:18:23 +02:00
dgtlmoon
f6e518497a Update README.md 2022-04-07 14:59:31 +02:00
dgtlmoon
63e91a3d66 Skip processing a watch into the RSS feed if there's not enough data to examine (fixes Internal Server Error when accessing the RSS feed) (#521) 2022-04-05 20:31:31 +02:00
dgtlmoon
3034d047c2 Introduce an AJAX button for sending test notifications instead of the checkbox (#519) 2022-04-05 18:04:26 +02:00
dgtlmoon
2620818ba7 Make text tab always available at default 2022-04-02 14:55:40 +02:00
dgtlmoon
9fe4f95990 When fetching a snapshot via Chrome, make the most recent screenshot available on the Diff and Preview pages (#516) 2022-04-02 14:49:32 +02:00
dgtlmoon
ffd2a89d60 Remove 'unviewed' status in watch table when Diff link clicked (#514) 2022-03-31 11:01:07 +02:00
dgtlmoon
8f40f19328 RSS feed CDATA should contain difference output 2022-03-30 10:51:10 +02:00
dgtlmoon
082634f851 Fix - {diff} and {diff_full} notifications tokens were not always including the full output 2022-03-29 19:18:26 +02:00
dgtlmoon
334010025f Update README.md 2022-03-26 14:02:56 +01:00
dgtlmoon
81aa8fa16b Update README.md 2022-03-26 09:56:56 +01:00
dgtlmoon
c79d6824e3 Minor UI cleanups (mobile tabs, font sizing) (#503) 2022-03-25 23:37:28 +01:00
zznidar
946377d2be Fix typo in Filters & Triggers settings. (#495) 2022-03-23 23:18:04 +01:00
zznidar
5db9a30ad4 Add autofocus attribute to password login field (#496) 2022-03-23 23:17:47 +01:00
dgtlmoon
1d060225e1 0.39.11 2022-03-23 09:42:51 +01:00
dgtlmoon
7e0f0d0fd8 Microsoft Windows installation fixes (#492) 2022-03-22 23:08:08 +01:00
dgtlmoon
8b2afa2220 GitHub tweak - container tags should be CSV list (Fix ghcr.io not building) 2022-03-22 00:08:05 +01:00
dgtlmoon
f55ffa0f62 GitHub tweak - build containers also on push to master 2022-03-21 23:08:17 +01:00
dgtlmoon
942c3f021f Allow changedetector to ignore status codes as a per-site setting (#479) (#485)
Co-authored-by: Ara Hayrabedian <ara.hayrabedian@gmail.com>
2022-03-21 23:03:54 +01:00
dgtlmoon
5483f5d694 Security update - Use CSRF token protection for forms, make "remove password" use HTTP Post (#484) 2022-03-21 22:54:27 +01:00
dgtlmoon
f2fa638480 Security update - Protect against file:/// type access by webdriver/chrome. (#483) 2022-03-21 20:59:20 +01:00
dgtlmoon
82d1a7f73e Only build container on GitHub releases, not tests 2022-03-20 16:57:36 +01:00
dgtlmoon
9fc291fb63 Also change container names to help stop some DNS issues 2022-03-17 19:59:37 +01:00
dgtlmoon
3e8a15456a Detect byte-encoding when the server mishandles the content-type header reply (#472) 2022-03-17 10:28:02 +01:00
dgtlmoon
2a03f3f57e Improving form/edit example markup 2022-03-13 12:00:45 +01:00
dgtlmoon
ffad5cca97 JSON diff/preview should use utf-8 encoding where possible (#465) 2022-03-13 11:37:51 +01:00
Tim Loderhose
60a9a786e0 Fix typo in settings form 2022-03-13 10:55:37 +01:00
dgtlmoon
165e950e55 Add python venv to .gitignore 2022-03-13 10:53:33 +01:00
dgtlmoon
c25294ca57 0.39.10 2022-03-12 17:28:30 +01:00
Tim Loderhose
d4359c2e67 Add filter to remove elements by CSS rule from HTML before change detection is run (#445) 2022-03-12 13:29:30 +01:00
dgtlmoon
44fc804991 Minor updates to filters form text 2022-03-12 11:20:43 +01:00
dgtlmoon
b72c9eaf62 Re #448 - Don't use changedetection.io as the container name and hostname, fix problems fetching from the real changedetection.io webserver :) 2022-03-12 08:24:51 +01:00
dgtlmoon
7ce9e4dfc2 Testing - Refactor HTTP Request Type test (#453) 2022-03-11 18:50:02 +01:00
dgtlmoon
3cc6586695 Make table header font size the same as content 2022-03-07 13:03:59 +01:00
dgtlmoon
09204cb43f Adjust background colours 2022-03-06 19:03:59 +01:00
dgtlmoon
a709122874 Handle the case where the visitor is already logged-in and tries to login again (#447) 2022-03-06 18:19:05 +01:00
dgtlmoon
efbeaf9535 Make the Request Override settings easier to understand 2022-03-06 17:23:21 +01:00
dgtlmoon
1a19fba07d Minor tweak to notification token table 2022-03-06 17:10:30 +01:00
dgtlmoon
eb9020c175 Style tweak to watch form 2022-03-06 17:05:23 +01:00
dgtlmoon
13bb44e4f8 Login form style fixes 2022-03-06 17:03:15 +01:00
dgtlmoon
47f294c23b Upgrade apprise notification engine to 0.9.7 (important telegram fixes) 2022-03-05 13:14:14 +01:00
dgtlmoon
a4cce16188 Remove pytest from production release pip requirements 2022-03-05 13:12:15 +01:00
dgtlmoon
69aec23d1d Style fix for background image relative to X-Forwarded-Prefix when running via reverse proxy subdirectory 2022-03-05 13:08:57 +01:00
dgtlmoon
f85ccffe0a Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-03-04 13:13:54 +01:00
dgtlmoon
0005131472 Re-arranging primary links so the important ones are easier to find on mobile 2022-03-04 13:06:39 +01:00
dgtlmoon
3be1f4ea44 Set authentication cookie path relative to X-Forwarded-Prefix when running via reverse proxy subdirectory (#446) 2022-03-04 11:23:32 +01:00
dgtlmoon
46c72a7fb3 Upgrade inscriptis HTML converter to version 2.2~ (#434) 2022-03-01 17:58:54 +01:00
dgtlmoon
96664ffb10 Better text/plain detection and refactor tests (#443) 2022-03-01 17:50:15 +01:00
dgtlmoon
615fa2c5b2 Tweak support tabs and text (#440) 2022-02-28 22:39:32 +01:00
dgtlmoon
fd45fcce2f Include link to changedetection.io hosted option (#439) 2022-02-28 15:47:59 +01:00
dgtlmoon
75ca7ec504 Improved CPU usage around the loop responsible for deciding which sites need to be checked 2022-02-28 15:08:51 +01:00
dgtlmoon
8b1e9f6591 Update README.md with hosting options 2022-02-26 18:42:54 +01:00
dgtlmoon
883aa968fd 0.39.9 2022-02-24 17:02:50 +01:00
dgtlmoon
3240ed2339 Minor reliability upgrade for large datasets - retry deepcopy (#436) 2022-02-24 16:58:51 +01:00
dgtlmoon
a89ffffc76 "Recheck" button should work when entry is in paused state 2022-02-24 16:49:48 +01:00
dgtlmoon
fda93c3798 Better file exception handling on saving index JSON 2022-02-24 16:36:24 +01:00
dgtlmoon
a51c555964 Fix small issue in highlight trigger/ignore preview page with setting the background colours, add test 2022-02-23 12:30:36 +01:00
dgtlmoon
b401998030 Ensure string matching on the ignore filter is always case-INsensitive 2022-02-23 12:01:11 +01:00
dgtlmoon
014fda9058 Ability to visualise trigger and filter rules against the current snapshot on the preview page 2022-02-23 10:49:25 +01:00
dgtlmoon
dd384619e0 Update README.md 2022-02-19 13:41:54 +01:00
Michael
85715120e2 XPath RegularExpression support 2022-02-19 13:40:57 +01:00
dgtlmoon
a0e4f9b88a better checking of JSON type 2022-02-17 18:16:47 +01:00
dgtlmoon
04bef6091e Make system level errors from the HTTP fetchers easier to find (#421) 2022-02-13 23:43:45 +01:00
dependabot[bot]
536948c8c6 Bump node-sass from 6.0.1 to 7.0.0 in /changedetectionio/static/styles (#415)
Bumps [node-sass](https://github.com/sass/node-sass) from 6.0.1 to 7.0.0.
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2022-02-11 09:10:55 +01:00
dgtlmoon
d4f4ab306a Don't allow redirect on login, it's safer and more reliable this way (#414) 2022-02-08 21:12:44 +01:00
dgtlmoon
8d2e240a2a When using Env. FETCH_WORKERS or WEBDRIVER_DELAY_BEFORE_CONTENT_READY, it should be type int 2022-02-08 20:01:24 +01:00
dgtlmoon
d7ed479ca2 0.39.8 2022-02-08 18:56:10 +01:00
dgtlmoon
f25cdf0a67 Number of fetching workers can be overridden by Env "FETCH_WORKERS" (#413) 2022-02-08 18:27:56 +01:00
dgtlmoon
5214a7e0f3 Adding Env var "WEBDRIVER_DELAY_BEFORE_CONTENT_READY" to wait n seconds before extracting the text from the browser 2022-02-08 18:24:25 +01:00
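Both of these overrides arrive from the environment as strings, hence the type-int fix a few entries above; a minimal sketch of consuming them (the default values here are placeholders, not the application's real defaults):

```python
import os

# Placeholder defaults; environment values are strings and must be cast to int.
fetch_workers = int(os.getenv("FETCH_WORKERS", 10))
webdriver_delay = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 0))
print(fetch_workers, webdriver_delay)
```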
dgtlmoon
eb3dca3805 Language fix - "watches are rechecking." actually puts them into an internal queue, so it now says "watches are QUEUED for rechecking" 2022-02-08 13:00:18 +01:00
dgtlmoon
a580c238b6 Use flask url_for() for webdriver chrome icon instead of relative path 2022-02-05 23:25:57 +01:00
Alexander Aleksandrovič Klimov
7ca89f5ec3 Fix typo in the startup create-directory command suggestion (#405) 2022-02-05 19:46:02 +01:00
Alexander Aleksandrovič Klimov
8ab8aaa6ae Introduce -h option to allow listening not on 0.0.0.0. (#406) 2022-02-05 19:29:22 +01:00
dgtlmoon
22ef9afb93 Refactor tests for notification error log handler (#404) 2022-02-04 20:54:20 +01:00
dgtlmoon
abaec224f6 Notification error log handler (#403)
* Add a notifications debug/error log interface (Link available under the notification URLs list)
2022-02-04 19:29:39 +01:00
dgtlmoon
5a645fb74d Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-02-04 17:31:54 +01:00
dgtlmoon
14db60e518 Add notification note - tgram:// bots can't send messages to other bots, so you should specify the chat ID of a non-bot user. 2022-02-04 17:31:32 +01:00
Radu Ursache
e250c552d0 fixed the reference to wiki for rpi section (#402) 2022-02-04 10:55:30 +01:00
dgtlmoon
8e54a17e14 /preview format doesn't need <pre> - fixing too many return lines in content on the diff/preview page 2022-02-02 14:39:42 +01:00
dgtlmoon
8607eccaad Update README.md 2022-02-02 11:33:22 +01:00
dgtlmoon
17511d0d7d Update README - Fix docker section 2022-01-30 15:20:26 +01:00
dgtlmoon
41b806228c Update README - Tidy up sections 2022-01-30 15:19:21 +01:00
dgtlmoon
453cf81e1d Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-30 02:15:15 +01:00
dgtlmoon
0095b28ea3 Offer instance on Lemonade
Tidy README
2022-01-30 02:14:32 +01:00
dgtlmoon
73101a47e7 Ability to use a generated salted password in deployments as env var SALTED_PASS (#397)
* Ability to use a generated salted password in deployments as env var SALTED_PASS
2022-01-29 19:36:44 +01:00
dgtlmoon
03f776ca45 #323 Adding note about discord:// 2000 char limit (#392)
* Adding note about discord:// 2000 char limit
2022-01-28 10:38:04 +01:00
dgtlmoon
39b7be9e7a plaintext mime type fix - Don't attempt to extract HTML content from plaintext, this will remove lines and break changedetection (#391) 2022-01-27 23:16:50 +01:00
dgtlmoon
6611823962 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-27 23:01:17 +01:00
dgtlmoon
c1c453e4fe .add_watch() can accept empty tag
Use https://changedetection.io/CHANGELOG.txt as a nice default page to watch
2022-01-27 23:00:39 +01:00
Tim Loderhose
4887180671 Add option for tags on import (#377)
* Add option for tags on import and backup
2022-01-25 18:46:05 +01:00
dgtlmoon
ac7378b7fb Update CONTRIBUTING.md 2022-01-24 22:09:14 +01:00
dgtlmoon
eeba8c864d Update README.md 2022-01-22 15:35:07 +01:00
Travis Howse
abe88192f4 Fix bug where diff and diff_full were switched in notification templates. (#380) 2022-01-21 12:26:08 +01:00
dgtlmoon
af8efbb6d2 Closes #378 2022-01-19 23:16:49 +01:00
dgtlmoon
bbc2875ef3 0.39.7 2022-01-15 23:21:06 +01:00
dgtlmoon
b7ca10ebac Scrub watch snapshot fixes 2022-01-15 23:18:04 +01:00
dgtlmoon
a896493797 Simple HTTP auth (#372)
HTTP Basic Auth form validation
2022-01-15 22:52:39 +01:00
dgtlmoon
e5fe095f16 Adding note about JS pages 2022-01-12 18:18:40 +01:00
dgtlmoon
271181968f Notification settings defaults and validation (#361)
* Re #360 - Validate that when a notification URL is set, we have also a notification body and title, new install should have notification title/body defaults set.
2022-01-10 17:38:04 +01:00
dgtlmoon
8206383ee5 Filters settings helper text tidy-up 2022-01-09 14:36:07 +01:00
dgtlmoon
ecfc02ba23 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-09 11:45:23 +01:00
dgtlmoon
3331ccd061 Add test for low-level network error text handling 2022-01-09 11:45:04 +01:00
Unpublished
bd8f389a65 Add API endpoint for current snapshot (#359) 2022-01-08 16:38:42 +01:00
dgtlmoon
bc74227635 Clarify notice/messages around changing ignore text 2022-01-05 20:42:45 +01:00
dgtlmoon
07c60a6acc Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-05 19:13:42 +01:00
dgtlmoon
7916faf58b 0.39.6 2022-01-05 19:13:36 +01:00
dgtlmoon
febb2bbf0d Heroku tweaks (backup download) (#356)
* use absolute path, just incase the data-dir is set relative
2022-01-05 19:12:13 +01:00
dgtlmoon
59d31bf76f XPath support (#355)
* XPath support and minor improvements to form validation
2022-01-05 17:58:07 +01:00
dgtlmoon
f87f7077a6 Better handling of EmptyReply exception, always bump 'last_checked' in the case of an error (#354)
* Better handling of EmptyReply exception, always bump 'last_checked' in the case of an error, adds test
2022-01-05 14:13:30 +01:00
revilo951
f166ab1e30 Adding note in comments for working arm64 chrome with rPi-4 (#336) 2022-01-05 12:20:56 +01:00
Valtteri Huuskonen
55e679e973 fix typo in README.md (#350)
Fix spelling of Raspberry Pi.
2022-01-04 10:55:20 +01:00
dgtlmoon
e211ba806f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2022-01-03 20:16:51 +01:00
dgtlmoon
b33105d576 Re #348 - Add test for backup, use proper datastore path 2022-01-03 20:16:21 +01:00
dgtlmoon
b73f5a5c88 Update README.md 2022-01-03 18:46:50 +01:00
Unpublished
023951a10e Be sure that documents returned with a application/json header are not parsed with inscriptis (#337)
* Auto-detect JSON by Content-Type header
* Add test to not parse JSON responses with inscriptis
2022-01-02 22:35:33 +01:00
dgtlmoon
fbd9ecab62 Re #340 - snapshot should not be modified by ignore text (#344) 2022-01-02 22:35:04 +01:00
dgtlmoon
b5c1fce136 Re #133 Option for ignoring whitespacing (#345)
* Global setting option to ignore whitespace when detecting a change
2022-01-02 22:28:34 +01:00
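
A minimal sketch of what ignoring whitespace during change detection can look like (illustrative only, not the project's exact implementation):

def normalise(text: str) -> str:
    # Strip leading/trailing whitespace and drop blank lines before comparing snapshots.
    return "\n".join(line.strip() for line in text.splitlines() if line.strip())

old_snapshot = "  Price: 19.99  \n\n"
new_snapshot = "Price: 19.99"
print(normalise(old_snapshot) != normalise(new_snapshot))  # False - whitespace-only difference is ignored
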
dgtlmoon
489671dcca Re #342 notification encoding (#343)
* Re #342 - check for accidental python byte encoding of non-utf8/string, check return type of fetcher and fix encoding of notification content
2022-01-02 14:11:04 +01:00
dgtlmoon
d4dc3466dc Update README.md 2022-01-01 18:11:54 +01:00
dgtlmoon
0439acacbe Adding global ignore text (#339) 2022-01-01 14:53:08 +01:00
dgtlmoon
735fc2ac8e Adding new proxyType to selenium mappings 2021-12-31 10:48:11 +01:00
dgtlmoon
8a825f0055 Use selenium 4.1.0 2021-12-31 10:44:45 +01:00
dgtlmoon
d0ae8b7923 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-31 10:35:47 +01:00
dgtlmoon
a504773941 Bumping selenium version re https://github.com/dgtlmoon/changedetection.io/pull/331#issuecomment-1003323594 2021-12-31 10:35:29 +01:00
Calvin Bui
feb8e6c76c Add socksVersion mapping (#331) 2021-12-31 10:26:38 +01:00
dgtlmoon
a37a5038d8 Fix broken RSS link fields 2021-12-30 00:04:38 +01:00
dgtlmoon
f1933b786c RSS Link links you back to the difference UI/JS page, RSS Description is the page you're watching, and RSS Title is the page you're watching 2021-12-29 23:57:30 +01:00
dgtlmoon
d6a6ef2c1d Unify Filters and Triggers tabs into a single tab 2021-12-29 23:37:04 +01:00
dgtlmoon
cf9554b169 Move 'request type' field to the new 'Requests' tab 2021-12-29 23:31:53 +01:00
dgtlmoon
d602cf4646 Aligning call signatures #325 2021-12-29 23:28:34 +01:00
Simon Caron
dfcae4ee64 Extend Request Parameters to add Body & Method (#325) 2021-12-29 23:18:29 +01:00
dgtlmoon
e3bcd8c9bf Update README.md 2021-12-29 08:55:37 +01:00
dgtlmoon
c4990fa3f9 Create CONTRIBUTING.md 2021-12-28 18:59:43 +01:00
dgtlmoon
98461d813e Update README.md 2021-12-28 18:57:39 +01:00
dgtlmoon
8ec17a4c83 Re #267 - Pass settings for the proxy setup for webdriver (#326)
* Re #267 - Pass HTTP_PROXY as the proxy setup for webdriver
* Update README.md
2021-12-28 17:07:41 +01:00
dgtlmoon
ee708cc395 Update README.md 2021-12-28 13:19:24 +01:00
dgtlmoon
8a670c029a Update README.md 2021-12-28 13:18:44 +01:00
dgtlmoon
9fa5aec01e Update README.md 2021-12-28 00:47:00 +01:00
dgtlmoon
43c9cb8b0c 0.39.5 2021-12-27 23:46:29 +01:00
dgtlmoon
b6a359d55b Update feature_request.md 2021-12-27 13:50:38 +01:00
dgtlmoon
ae5a88beea Update issue templates 2021-12-27 13:49:07 +01:00
dgtlmoon
a899d338e9 Update bug_report.md 2021-12-27 13:48:02 +01:00
dgtlmoon
7975e8ec2e Update issue templates 2021-12-27 13:46:41 +01:00
dgtlmoon
ce383bcd04 W3C HTML validation issue around RSS icon 2021-12-27 10:55:43 +01:00
dgtlmoon
0b0cdb101b Closes #323 adds link to wiki 2021-12-27 10:14:40 +01:00
dgtlmoon
396509bae8 Update README.md 2021-12-22 10:43:22 +01:00
dgtlmoon
2973f40035 Update README.md 2021-12-22 10:42:48 +01:00
dgtlmoon
067fac862c Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-19 23:17:48 +01:00
dgtlmoon
20647ea319 improve theming docs 2021-12-19 23:17:24 +01:00
dgtlmoon
fafc7fda62 Update README.md 2021-12-19 23:10:55 +01:00
dgtlmoon
b1aaf9f277 Update README.md 2021-12-19 23:04:56 +01:00
dgtlmoon
18987aeb23 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-19 18:17:37 +01:00
dgtlmoon
856789a9ba Closes #315 - Include library apprise Notify_mqtt 2021-12-19 18:16:51 +01:00
Iván
2857c7bb77 Re #80, sets SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites (#312)
* set SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites

* Re #80, sets SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites
2021-12-16 12:13:47 +01:00
dgtlmoon
df951637c4 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-16 11:53:39 +01:00
dgtlmoon
ba6fe076bb Go back to docker hub 2021-12-16 11:53:28 +01:00
dgtlmoon
9815fc2526 RSS allow access via token (#310)
Allow access via a token
* New RSS URL
* Redirect the old RSS feed URL
* fix tests
2021-12-16 00:05:01 +01:00
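
A hypothetical Flask sketch of token-gated RSS access; the route, parameter name and helper below are illustrative only, not the project's actual code.

from flask import Flask, request, abort

app = Flask(__name__)
app.config["RSS_ACCESS_TOKEN"] = "some-random-token"

def generate_feed() -> str:
    # Placeholder for the real feed builder.
    return '<rss version="2.0"></rss>'

@app.route("/rss")
def rss():
    # Reject requests that do not carry the expected access token.
    if request.args.get("token") != app.config["RSS_ACCESS_TOKEN"]:
        abort(403)
    return generate_feed()
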
dgtlmoon
e71dbbe771 Adding deploy to Heroku button 2021-12-15 23:32:48 +01:00
dgtlmoon
bd222c99c6 Adding heroku app.json app 2021-12-15 23:28:23 +01:00
dgtlmoon
4b002ad9e0 Tweak runtime Heroku version 2021-12-15 23:20:21 +01:00
dgtlmoon
fe2ffd6356 Tweaking heroku Procfile 2021-12-15 23:20:06 +01:00
dgtlmoon
266bebb5bc Adjust buildpacks on Heroku 2021-12-15 23:15:36 +01:00
dgtlmoon
115ff5bc2e Adding heroku python3 runtime config 2021-12-15 23:13:03 +01:00
dgtlmoon
dd6a24d337 Try simpler heroku recipe 2021-12-15 23:09:43 +01:00
dgtlmoon
f0d418d58c Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-15 23:07:32 +01:00
dgtlmoon
10d3b09051 -C option to create a datadir if it doesnt exist 2021-12-15 23:07:13 +01:00
dgtlmoon
35d0c74454 Re #308 - Adding test and including settings in clone operation (#309) 2021-12-15 19:54:30 +01:00
Glassed Silver
dd450b81ad fixing too small font in diff UI (#260)
* fixing too small font in diff UI , lower size from 12 to 11 in Part II
2021-12-15 19:21:25 +01:00
dgtlmoon
512d76c52b Update README.md
Make link more accurate
2021-12-10 20:21:27 +01:00
dgtlmoon
5a10acfd09 Send diff in notifications (#296) 2021-12-10 12:08:51 +01:00
dgtlmoon
a7c09c8990 Fix scrub form theme 2021-12-10 00:09:54 +01:00
dgtlmoon
9235eae608 Scrub dates: Fix date regex limit handler parsing 2021-12-10 00:09:42 +01:00
dgtlmoon
5bbd82be79 Wait 60 seconds or until stop_thread is set 2021-12-09 23:28:17 +01:00
dgtlmoon
7f8c0fb2fa Check that a notification URL is set when sending the test notification (#300) 2021-12-08 12:23:48 +01:00
Tristan Hill
489eedf34e Flask 2 (#299)
Co-authored-by: Tristan Hill <t+git@eaux.uk>
2021-12-07 23:23:23 +01:00
dgtlmoon
3956b3fd68 Re #269 - Show current/correct BASE_URL information (#271)
* Re #269 - Show current/correct BASE_URL information
2021-12-04 15:23:23 +01:00
dgtlmoon
61c1d213d0 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-12-04 14:48:18 +01:00
dgtlmoon
e07f573f64 Re #269 - Fix env var comment name 2021-12-04 14:47:46 +01:00
ghjklw
ecba130fdb Enable Markdown and HTML notifications. (#288)
This change enables defining the notification body as HTML or Markdown. This can be very
useful to have more user-friendly notifications such as:
* applying a heading style to the `{watch_title}` to make it stand out
* creating clickable links using the `{watch_url}`, `{preview_url}` and `{diff_url}`.

Changes
=======
* Add a `notification_format` to the notification settings, defaults to plain text.
* Use the `body_format` parameter of Apprise's `notify` method.

Co-authored-by: Malo Jaffré <malo.jaffre@dunnhumby.com>
2021-12-04 14:41:48 +01:00
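
For example, Apprise's notify() accepts the body_format mentioned above; a small sketch (the discord:// URL is a placeholder):

import apprise

a = apprise.Apprise()
a.add("discord://webhook_id/webhook_token")  # placeholder notification target
a.notify(
    title="Change detected",
    body="## Page changed\n[View the diff](https://example.com/diff/some-uuid)",
    body_format=apprise.NotifyFormat.MARKDOWN,
)
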
dgtlmoon
ff6dc842c0 0.39.4 release 2021-12-02 22:54:38 +01:00
dgtlmoon
4659993ecf Re #286 - Solving lost data/corrupted data - Tweak timing and try to write to a temp file first (#292)
* Re #286 - Tweak timing and try to write to a temp file first, Increase logging and format info message better.
2021-12-02 22:48:44 +01:00
jeremysherriff
0a29b3a582 Fix element paths when using reverse proxy subfolder (#272) 2021-11-12 11:34:19 +01:00
dgtlmoon
c55bf418c5 0.39.3 release 2021-10-28 11:32:33 +02:00
dgtlmoon
4bbb7d99b6 Re #264 - fixing clone watch operation 2021-10-28 11:29:59 +02:00
dgtlmoon
a8e92e2226 Re #265 - extended jsonpath support (#266)
* Re #265 - Use extended JSONpath support,
Allow a JSONPath selector to not match anything (yet)
Adding test
Correctly capture invalid JSONPath query error
2021-10-27 09:24:08 +02:00
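
A short sketch of the behaviour described above, assuming the jsonpath-ng extended parser is what backs the 'json:' filters; a selector that matches nothing simply yields an empty result rather than an error.

from jsonpath_ng.ext import parse

data = {"products": [{"name": "A", "price": 5}, {"name": "B", "price": 15}]}
prices = [m.value for m in parse("$.products[*].price").find(data)]
print(prices)                            # [5, 15]
print(parse("$.missing[*]").find(data))  # [] - no match (yet), not an exception
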
dgtlmoon
c17327633f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-10-26 22:32:29 +02:00
dgtlmoon
56d1dde7c3 Re #265 - wasnt catching the jsonpath exception due to invalid jsonpath expressions properly 2021-10-26 22:30:58 +02:00
dgtlmoon
6e4ddacaf8 Re #257 - Handle bool val of json path better (#263)
* Re #257 - Handle bool val of json path better, with test
2021-10-21 23:25:38 +02:00
dgtlmoon
3195ffa1c6 Re #249 - Add EXPOSE 5000 to Dockerfile 2021-10-06 22:28:35 +02:00
dgtlmoon
c749d2ee44 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-10-06 20:51:38 +02:00
dgtlmoon
ec94359f3c Provide better combination of chardet and urllib3 2021-10-06 20:51:05 +02:00
dgtlmoon
4d0bd58eb1 Prefer GHCR.io over DockerHub (#245)
* Prefer GHCR.io over DockerHub (DockerHub limits pulls)
2021-10-06 13:07:56 +02:00
dgtlmoon
3525f43469 Limit branches/tags of container build
Limit branch
2021-10-06 12:27:02 +02:00
dgtlmoon
d70252c1eb Re #213 - Adding screensize examples to selenium container 2021-10-06 11:34:24 +02:00
dgtlmoon
b57b94c63a Be more specific about tagged release builds 2021-10-06 11:28:39 +02:00
dgtlmoon
9e914c140e Fix :latest release workflow syntax check 2021-10-06 10:27:03 +02:00
dgtlmoon
5d5ceb2f52 Form helper - explain where the webdriver setting comes from 2021-10-06 09:27:41 +02:00
dgtlmoon
bc0303c5da Rename workflow name 2021-10-06 08:59:03 +02:00
dgtlmoon
1240da4a6e Just 'published' and 'edited' package release is enough (remove 'created') 2021-10-06 08:52:10 +02:00
dgtlmoon
4267bda853 Fixing workflow tag syntax issues 2021-10-06 08:49:33 +02:00
dgtlmoon
db1ff1843c fix broken workflow syntax 2021-10-06 08:45:05 +02:00
dgtlmoon
fe3c20b618 add step for metadata debug, see if it runs by checking workflow tag name 2021-10-06 08:42:40 +02:00
dgtlmoon
2fa93cba3a Container build/push doesnt need to be so specific 2021-10-05 22:09:12 +02:00
dgtlmoon
254fbd5a47 Oops on/release was in the wrong block 2021-10-05 19:13:45 +02:00
dgtlmoon
18f2318572 release also on edited, published 2021-10-05 19:05:09 +02:00
dgtlmoon
84417fc2b1 Run workflow on release 2021-10-05 19:02:05 +02:00
dgtlmoon
7f7fc737b3 Use a better switch mechanism for build type 2021-10-05 18:48:54 +02:00
dgtlmoon
2dc43bdfd3 version 0.39.2 2021-10-05 18:21:40 +02:00
dgtlmoon
95e39aa727 Configurable BASE_URL (#228)
Re #152 ability to over-ride env var BASE_URL, with UI+tests
2021-10-05 18:15:36 +02:00
dgtlmoon
2c71f577e0 Split python pip builder to its own release based workflow 2021-10-05 17:01:34 +02:00
dgtlmoon
f987d32c72 remove accidental syntax add 2021-10-05 17:01:26 +02:00
dgtlmoon
cd7df86f54 Re #242 - app was treating notification field defaults as the field value (#244) 2021-10-05 14:33:57 +02:00
dgtlmoon
cb8fa2583a attempt to re-enable docker layer cache 2021-10-05 11:48:09 +02:00
dgtlmoon
3d3e5db81c Forgot GHCR tag with version 2021-10-05 11:43:56 +02:00
dgtlmoon
c9860dc55e Limit container build to releases and master 2021-10-05 11:13:23 +02:00
dgtlmoon
dbd5cf117a Fix GHCR login 2021-10-05 10:47:50 +02:00
dgtlmoon
e805d6ebe3 Use the same workflow for tag and release 2021-10-05 10:40:28 +02:00
dgtlmoon
01f469d91d Drop redundant build workflow 2021-10-05 10:33:15 +02:00
dgtlmoon
e91cab0c6d try :latest and :tag in same workflow run 2021-10-05 10:28:27 +02:00
dgtlmoon
106c3269a6 Separate workflows 2021-10-05 10:16:23 +02:00
dgtlmoon
1628602860 Docker image build issues (#243)
Pin cryptography ~= 3.4, fixes build issues for multiplatform docker buildx, and a little tidy up of github workflows.
2021-10-05 10:09:10 +02:00
dgtlmoon
ca0ab50c5e Re #239 - Individual GUID for watch+changeevent (#241)
* Re #239 - Individual GUID for watch+changeevent
2021-10-04 08:34:10 +02:00
dgtlmoon
df0b7bb0fe Update README.md
Re #240 return update instructions
2021-10-03 19:25:50 +02:00
dgtlmoon
fe59ac4986 Re #232 - Use a copy of the datastore incase it changes while we iterate through it (#234) 2021-09-23 18:27:16 +02:00
dgtlmoon
25476bfcb2 Setting for Extract <title> as title option on individual watches (#229)
* Extract <title> as title option on individual items
2021-09-19 22:57:15 +02:00
dgtlmoon
6901fc493d Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-09-18 10:33:09 +02:00
dgtlmoon
c40417ff96 GitHub repo build platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7 2021-09-18 10:32:45 +02:00
dgtlmoon
fd2d938528 GitHub container repo (#227) 2021-09-18 00:11:54 +02:00
dgtlmoon
cd20dea590 Remove extra build step 2021-09-17 23:50:58 +02:00
dgtlmoon
f921e98265 push github container master also 2021-09-17 23:44:57 +02:00
dgtlmoon
c0e905265c Tidy up workflow names 2021-09-17 23:38:50 +02:00
dgtlmoon
5e6a923c35 Attempt to setup GitHub Container Registry 2021-09-17 23:37:28 +02:00
dgtlmoon
7618081e83 v0.39.1 2021-09-17 18:40:16 +02:00
dgtlmoon
b903280cd0 Re #185 - [feature] Custom notifications templates per watch (#226)
* Re #185 - [feature] Custom text templates for the notification per monitored entry as override.
Bonus points: Adding validation for apprise URLs
2021-09-17 18:37:26 +02:00
dgtlmoon
5b60314e8b Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-09-17 16:03:06 +02:00
dependabot[bot]
dfd34d2a5b Bump tar from 6.1.6 to 6.1.9 in /changedetectionio/static/styles (#209)
Bumps [tar](https://github.com/npm/node-tar) from 6.1.6 to 6.1.9.
- [Release notes](https://github.com/npm/node-tar/releases)
- [Changelog](https://github.com/npm/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/npm/node-tar/compare/v6.1.6...v6.1.9)

---
updated-dependencies:
- dependency-name: tar
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-09-17 16:02:59 +02:00
dgtlmoon
98f6f0c80d Re #225 - Notifications refactor token replacement fix possible missing value for watch_title 2021-09-17 16:02:54 +02:00
dgtlmoon
8c65c60c27 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-09-17 16:00:49 +02:00
dgtlmoon
bd0d9048e7 Re #42 - Notifications refactor token replacement fix possible missing value for watch_title 2021-09-17 15:58:04 +02:00
dependabot[bot]
3b14be4fef Bump tar from 6.1.6 to 6.1.9 in /changedetectionio/static/styles (#209)
Bumps [tar](https://github.com/npm/node-tar) from 6.1.6 to 6.1.9.
- [Release notes](https://github.com/npm/node-tar/releases)
- [Changelog](https://github.com/npm/node-tar/blob/main/CHANGELOG.md)
- [Commits](https://github.com/npm/node-tar/compare/v6.1.6...v6.1.9)

---
updated-dependencies:
- dependency-name: tar
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2021-09-03 16:17:45 +02:00
Matthias Langhard
05f7e123ed Adds 'Create Copy' feature to clone a watch (#184) 2021-08-26 22:10:17 +02:00
dgtlmoon
54d80ddea0 adding specific test (#205)
Regex UUID test
Co-authored-by: Minty <mmeminty@gmail.com>
2021-08-23 08:52:49 +02:00
Minty
b9e0ad052f New notification tokens - watch_uuid, watch_title, watch_tag, (#201)
* New notification tokens ; Tokens added: watch_uuid, watch_title, watch_tag, updated settings description
2021-08-22 22:36:10 +02:00
dgtlmoon
f8937e437a Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-08-22 18:49:29 +02:00
dgtlmoon
fbe9270528 Re #203 - validate tokens (#204)
* Re #203 - validate tokens
2021-08-22 18:45:32 +02:00
dgtlmoon
58c3bc371d No point hiding the notifications customisation area because it's now in its own tab 2021-08-22 18:44:51 +02:00
dgtlmoon
4683b0d120 Update README.md 2021-08-20 18:24:49 +02:00
dgtlmoon
5fb9bbdfa3 Test - prove that notifications are not being sent when content does not change 2021-08-19 18:58:30 +02:00
dgtlmoon
5883e5b920 remove quotes from env vars 2021-08-19 16:55:28 +02:00
dgtlmoon
b99957f54a Re https://github.com/dgtlmoon/changedetection.io/discussions/189
A note to not use quotes in env parts
2021-08-19 16:43:41 +02:00
dgtlmoon
21cb7fbca9 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-08-19 16:37:00 +02:00
dgtlmoon
4ed5d4c2e7 WebDriver fetcher - settings - when an alternative one is configured, show it in the label 2021-08-19 16:36:29 +02:00
dgtlmoon
8c3163f459 Update README.md 2021-08-16 16:34:44 +02:00
dgtlmoon
a11b6daa2e Installation via pip (#186)
Builder for https://pypi.org/project/changedetection.io/
2021-08-16 15:24:37 +02:00
dgtlmoon
642ad5660d Update README.md 2021-08-16 13:57:43 +02:00
dgtlmoon
252d6ee6fd Trigger text/wait (#187)
Re #71 - Ability to set filters
2021-08-16 13:13:17 +02:00
dgtlmoon
ba7b6b0f8b Reword group tag - more obvious name 2021-08-15 22:16:18 +02:00
dgtlmoon
f2094a3010 Fix img alt/title accessibility for pause icon 2021-08-15 21:53:47 +02:00
dgtlmoon
b9ed7e2d20 Let the fetcher throw an exception which will be caught and handed to the operator anyway 2021-08-12 12:56:26 +02:00
dgtlmoon
6d3962acb6 Example placeholder was pushed out 2021-08-12 12:56:12 +02:00
dgtlmoon
32a0d38025 Move fetcher tab back to general - save space on mobile 2021-08-12 12:51:43 +02:00
dgtlmoon
df08d51d2a WebDriver test fetch should use environment var too 2021-08-12 12:33:31 +02:00
dgtlmoon
d87c643e58 Add fetch option to each watch 2021-08-12 12:28:17 +02:00
dgtlmoon
9e08f326be Chrome/Webdriver support for Javascript websites (#114)
JS Support via fetching the page over WebDriver/Selenium network
Refactor forms (Split into logical tabs)
2021-08-12 12:05:59 +02:00
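
In practice this kind of fetch goes through a remote Selenium/Chrome container; a minimal sketch follows (the WEBDRIVER_URL value and options are examples, not the application's exact fetcher code).

from selenium import webdriver
from selenium.webdriver.chrome.options import Options

options = Options()
options.add_argument("--headless")
driver = webdriver.Remote(
    command_executor="http://selenium:4444/wd/hub",  # example WEBDRIVER_URL
    options=options,
)
try:
    driver.get("https://example.com")
    rendered_html = driver.page_source  # HTML after JavaScript has run
finally:
    driver.quit()
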
dgtlmoon
1f821d6e8b Fixing tar npm security issue npm install "tar@>=6.1.2" 2021-08-07 14:20:13 +02:00
dgtlmoon
00fe4d4e41 tag 0.38.2 2021-08-07 14:18:28 +02:00
dgtlmoon
f88561e713 Re #172 - be sure that we are non-greedy matching the first : when splitting the headers so we dont break "Cookie" header (#175) 2021-08-07 14:15:41 +02:00
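
The fix described above amounts to splitting each header line on the first colon only, so values that themselves contain ':' survive intact; for example:

raw_header = "Cookie: session=abc123; theme=dark:blue"
# maxsplit=1 keeps everything after the first ':' as the value.
name, value = raw_header.split(":", 1)
print(name.strip(), "->", value.strip())  # Cookie -> session=abc123; theme=dark:blue
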
dgtlmoon
dd193ffcec Update heroku.yml
Re #156 - You can specify the port here too, to be sure
2021-07-28 17:18:10 +02:00
dgtlmoon
1e39a1b745 Re #156 - PORT should always be an Integer 2021-07-28 13:59:50 +02:00
Leigh
1084603375 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-07-26 07:11:54 +02:00
Leigh
3f9d949534 Re #159 - Adding env var example to docker-config.yml 2021-07-26 07:10:57 +02:00
Tim Chepeleff
684deaed35 Add Heroku Deployment Support (#159)
* add heroku.yml

* Use environment supplied port

* Update changedetection.py
2021-07-26 06:43:23 +02:00
Leigh
1b931fef20 Re #154 - Handle missing JSON better 2021-07-25 13:55:28 +02:00
Leigh
d1976db149 high res 2021-07-25 09:18:05 +02:00
Leigh
a8fb17df9a higher res screenshot 2021-07-25 09:17:02 +02:00
Leigh
8f28c80ef5 Update screenshot 2021-07-25 09:16:07 +02:00
Leigh
5a2c534fde Assert that html_tools.JSONNotFound is correctly raised 2021-07-25 07:22:29 +02:00
dgtlmoon
e2304b2ce0 Re #154 Ldjson extract parse (#158)
* Use parsable JSON hiding in <script type="application/ld+json"> where possible, if it matches the filter rule, use it.
* Update README.md
2021-07-25 07:02:19 +02:00
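
Conceptually this means checking <script type="application/ld+json"> blocks for parsable JSON before falling back to HTML-to-text conversion; a small sketch using BeautifulSoup (assumed here purely for illustration):

import json
from bs4 import BeautifulSoup

html_doc = '''<html><head>
<script type="application/ld+json">{"@type": "Product", "offers": {"price": "19.99"}}</script>
</head><body>...</body></html>'''

soup = BeautifulSoup(html_doc, "html.parser")
for script in soup.find_all("script", {"type": "application/ld+json"}):
    data = json.loads(script.string)
    print(data["offers"]["price"])  # 19.99
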
dgtlmoon
b87236ea20 Responsive fix for input field on mobile 2021-07-22 21:39:41 +10:00
dgtlmoon
dfbc9bfc53 Re #148 - Always set something for {base_url} so we dont send possibly an empty body/title notification which could break some services. 2021-07-22 20:09:42 +10:00
dgtlmoon
f3ba051df4 Add medium-size-desktop class to notification custom title 2021-07-22 20:06:27 +10:00
dgtlmoon
affe39ff98 Notification default: Make sure to use at least some text here, a blank notification body could be problematic for some services 2021-07-22 20:00:51 +10:00
dgtlmoon
0f5d5e6caf Re #150 - stop using 'size' across all elements and rely on CSS for a better mobile experience (stops fields from pushing out) 2021-07-22 19:38:10 +10:00
Preston
2a66ac1db0 fix: setting overflow in mobile view (#150) 2021-07-22 18:54:01 +10:00
dgtlmoon
07308eedbd Re #121, #123 - Show the current base_url value 2021-07-22 10:52:29 +10:00
dgtlmoon
750b882546 Re #149 - allow empty timestamp limit for scrub operation 2021-07-22 10:32:42 +10:00
dgtlmoon
1c09407e24 Dont show "new version available" message when password is enabled and user is logged out 2021-07-21 21:47:42 +10:00
dgtlmoon
7e87591ae5 test fix - dont trigger notifications in header test 2021-07-21 20:31:52 +10:00
dgtlmoon
9e6c2bf3e0 Strengthen the notification tests 2021-07-21 20:21:12 +10:00
dgtlmoon
c396cf8176 Re #137 - Adding test to confirm that headers are not repeated 2021-07-21 19:51:12 +10:00
dgtlmoon
b19a037fac Add debug output to notify loop 2021-07-21 13:13:31 +10:00
dgtlmoon
5cd4a36896 Add note to field 2021-07-21 13:05:30 +10:00
dgtlmoon
aec3531127 Cleanup test helper data before and after running 2021-07-21 12:49:32 +10:00
dgtlmoon
78434114be Improve debug info 2021-07-21 12:49:22 +10:00
dgtlmoon
f877cbfe8c 0.38.1 tag 2021-07-20 17:57:27 +10:00
dgtlmoon
fe4963ec04 Re #143 - Remove old notification test code, fix form handler (#145)
* Re #143 - global notification settings box fix - Remove old notification test code, fix form handler, add test
2021-07-20 17:44:01 +10:00
dgtlmoon
32a798128c Update README.md 2021-07-18 18:15:44 +10:00
dgtlmoon
cf4e294a9c Re #135 - refactor the quick add widget (#136)
* Re #135 - refactor the quick add widget

* Fix W3C validation issues
2021-07-18 13:26:23 +10:00
Richard Schwab
b008269a70 Partially revert 47e5a7cf09 (#138)
Copy HTTP headers from the global template instead of updating the global template when fetching a site.

fixes #137
2021-07-18 10:12:23 +10:00
dgtlmoon
50026ee6d9 use a github action for getting the tag 2021-07-16 16:24:01 +10:00
dgtlmoon
aa5ba7b3a9 rename tag build runner 2021-07-16 16:19:04 +10:00
dgtlmoon
4110d05bf8 fix tag 2021-07-16 16:12:03 +10:00
dgtlmoon
6c02bc9cd3 build and push tag 2021-07-16 16:03:45 +10:00
dgtlmoon
0a9b5f801f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-07-14 20:21:14 +10:00
dgtlmoon
b4630d4200 Re #76 - Fixing links 2021-07-14 20:20:47 +10:00
dgtlmoon
2238b7d660 Cleaner is to let flexbox overflow and scroll on the X where needed 2021-07-14 14:46:18 +10:00
dgtlmoon
e6fadc44fa #76 app path prefix when behind proxy_pass (#91)
Support for running in a sub-path under proxy_pass (Running changedetection.io behind a reverse proxy sub directory) 
More here https://github.com/dgtlmoon/changedetection.io/wiki/Running-changedetection.io-behind-a-reverse-proxy-sub-directory
2021-07-14 14:02:24 +10:00
dgtlmoon
c0b6233912 Settings: Remove password link fix 2021-07-14 13:38:32 +10:00
dgtlmoon
9669f8248e Make sure right menu is still visible when URL is long 2021-07-14 13:36:58 +10:00
dgtlmoon
b2b8958f7b 0.38 release 2021-07-14 11:51:33 +10:00
dgtlmoon
83daa6f630 Re #132 - Make a list of the JSONpath results instead of using only the first value 2021-07-14 11:15:32 +10:00
dgtlmoon
dad48402f1 Customisable notifications (#123)
* Customisable notifications (#121)
* Test improvements
* Setup BASE_URL environment in test

Co-authored-by: dtomlinson91 <53234158+dtomlinson91@users.noreply.github.com>
2021-07-13 18:48:21 +10:00
dgtlmoon
655a350f50 Re #117 - dont re-encode single value types, looks better in the diff 2021-07-12 18:27:03 +10:00
dgtlmoon
ae0fc5ec0f Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-07-11 23:05:22 +10:00
dgtlmoon
851142446d Usability tweak - [edit] on diff page should go back to diff page 2021-07-11 22:56:43 +10:00
dgtlmoon
dc2896c452 Update README.md 2021-07-11 22:11:53 +10:00
dgtlmoon
306814f47f Adding text about JSON API Monitoring 2021-07-11 22:10:49 +10:00
dgtlmoon
e073521f4d Re #117 Jsonpath based JSON change detection filter (#125)
* Re #117 - Experimental JSON selector support by using 'json:' prefix and any JSONpath rule
2021-07-11 22:07:39 +10:00
dgtlmoon
f2643c1b65 Update README.md 2021-07-11 19:38:54 +10:00
dgtlmoon
0e291de045 Update README.md 2021-07-11 19:36:44 +10:00
dgtlmoon
2f22d627fa Use right sticky for version 2021-07-10 23:14:59 +10:00
dgtlmoon
cd622261e9 Re #118 - Make 'show current version' more obvious 2021-07-10 23:07:46 +10:00
dgtlmoon
39a696fc7c Diff page - use the document title in <title> for better bookmarking 2021-07-10 16:31:16 +10:00
dgtlmoon
db5afa1fa2 node-sass 6.0.1 works with node-sass watch way better 2021-07-06 23:04:40 +10:00
dgtlmoon
56c56c63e8 Updating inscriptis/text/html library to 1.2 2021-07-04 23:09:49 +10:00
dgtlmoon
cb0d69801f Update readme with the branch link for javascript support 2021-07-04 13:51:19 +10:00
dgtlmoon
99ddc0490b Updating trim-newlines packages 2021-07-03 12:04:26 +10:00
dgtlmoon
b27d03e8c7 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-07-02 20:19:26 +10:00
dgtlmoon
f852bdda0e 0.37 release 2021-07-02 20:18:41 +10:00
dgtlmoon
b85af8904a #110 global recheck time (#113)
* Re #106 - handling empty title with gettr cleanup

* Re #110 - Global recheck time improvements, add tests, add form feedback, follow default minutes

* Adding comments
2021-07-02 12:14:09 +10:00
dgtlmoon
db18866b0a Re #106 - handling empty title with gettr cleanup (#107) 2021-06-27 12:29:41 +10:00
dgtlmoon
3fa6bc5ffd Update README.md
Adding more docker start help
2021-06-26 13:34:40 +10:00
dgtlmoon
25185e6d00 Auto extract html title as title (#102)
* Auto extract <title> as watch title, Minor refactor for html tooling
2021-06-24 19:10:19 +10:00
dgtlmoon
9af1ea9fc0 Bug fix - Check 'minutes_between_check' is set 2021-06-24 11:26:16 +10:00
dgtlmoon
aa51c7d34c tweak <pre> text wrapping when displaying diff 2021-06-23 21:05:22 +10:00
dgtlmoon
f215adbbe5 CSS Filter - Smarter is to just extract the HTML blob and continue with inscriptis, so we have almost the same output as not using the filter 2021-06-23 20:40:01 +10:00
dgtlmoon
8d59ef2e10 CSS Filter - restore nicer linefeeds 2021-06-23 12:52:04 +10:00
dgtlmoon
e3a9847f74 @todo Comment - BS4's element.get_text() seems to lose the indentation format no-matter what 2021-06-23 12:49:53 +10:00
dgtlmoon
47f7698b32 CSS Filter - strip text of whitespacing, preserve new lines where applicable, remove extra newlines 2021-06-23 12:29:14 +10:00
dgtlmoon
c6a4709987 Include statistics for number of watches 2021-06-22 11:40:45 +10:00
dgtlmoon
6c35995cff Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-06-22 11:21:29 +10:00
dgtlmoon
fa6c31fd50 Set edit-form for settings+watch to always be wide 2021-06-22 11:20:51 +10:00
dgtlmoon
58dfeaeec8 Update README.md 2021-06-22 10:33:27 +10:00
dgtlmoon
f717ad1bb6 0.36 2021-06-22 10:23:58 +10:00
dgtlmoon
8a0b33c1e8 Re #42 - dont use blank titles 2021-06-22 10:21:53 +10:00
dgtlmoon
f762d889f9 Re #100 - Fixing storage of minutes_between_check and adding automated test for field storage 2021-06-22 10:16:56 +10:00
dgtlmoon
d82465d428 0.35 2021-06-22 00:28:41 +10:00
dgtlmoon
74cf72c9cd Time between rechecks is always stored as minutes 2021-06-22 00:25:34 +10:00
dgtlmoon
03c1ad3989 Ability to reset app password by placing a file called removepassword.lock into your data directory and restarting the instance 2021-06-21 22:57:48 +10:00
dgtlmoon
ed7c2f01da Adding tests for password control handling 2021-06-21 22:36:09 +10:00
dgtlmoon
0923aa5b73 Remove unused field (removepassword is actually a link) 2021-06-21 22:32:59 +10:00
dgtlmoon
04acd8b2f8 0.34 2021-06-21 22:13:14 +10:00
dgtlmoon
45bd454e26 Be sure not to use blank passwords as the password 2021-06-21 22:12:47 +10:00
dgtlmoon
a429223858 Re #42 - custom title (#98) 2021-06-21 21:44:58 +10:00
dgtlmoon
59eb83974e Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-06-21 20:08:42 +10:00
dgtlmoon
d4928e34eb 0.33 2021-06-21 20:07:04 +10:00
dgtlmoon
8bcc277310 Re #92 - Re-use existing [preview] function for viewing current (#97) 2021-06-21 19:35:13 +10:00
dgtlmoon
53b9640ac5 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-06-21 17:32:06 +10:00
dgtlmoon
854520005d #81 - Regex support (#90)
* Re #81 - Regex support
* minor cleanup
2021-06-21 17:17:22 +10:00
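
A rough sketch of how regex rules can sit alongside plain substring rules when deciding whether a line should be ignored or should trigger; the slash-wrapping convention below is an assumption for illustration only.

import re

def line_matches_rule(line: str, rules: list) -> bool:
    for rule in rules:
        if rule.startswith("/") and rule.endswith("/"):
            # Treat /.../ as a regular expression.
            if re.search(rule[1:-1], line, re.IGNORECASE):
                return True
        elif rule in line:
            return True
    return False

print(line_matches_rule("1234 people are watching", [r"/\d+ people are watching/"]))  # True
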
dgtlmoon
4dbfd376f2 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-06-21 16:21:30 +10:00
dgtlmoon
af24079053 Use wtforms handler (#96)
Refactor forms and styling with wtforms
2021-06-21 16:21:05 +10:00
dgtlmoon
a91c4dbe92 Re #95 - Include PUID/PGID example 2021-06-21 10:03:08 +10:00
dgtlmoon
3f9fab3944 re-enable tests 2021-06-21 09:48:52 +10:00
dgtlmoon
1772568559 On settings submit, display saved message 2021-06-19 10:20:48 +10:00
dgtlmoon
fa3ce97634 Use flasks' built in 'flash' method instead of a custom message/notices (#94)
* Use flasks' built in 'flash' method instead of a custom message/notice handler

* Move app.secret_key setup to inside app
2021-06-18 22:04:29 +10:00
dgtlmoon
fed2de66a0 Adding rPi support info 2021-06-18 22:00:33 +10:00
dgtlmoon
e761405f58 Re #92 Adding link to CSS selector help in wiki 2021-06-18 11:37:27 +10:00
dgtlmoon
23738c98bc Re #93 - tweak build packages 2021-06-17 23:04:51 +10:00
dgtlmoon
07c7663e56 Re #93, #79 - docker image multistage build lost the packages required for rPi etc 2021-06-17 22:42:07 +10:00
Leonardo Brondani Schenkel
cec45a7ad7 Strip surrounding whitespace from elements (#89) 2021-06-16 13:57:22 +10:00
dgtlmoon
dc62bcdfca Queue an entry for immediate recheck after [edit] 2021-06-16 13:38:01 +10:00
dgtlmoon
d304449cb1 Adding helper method to remove text files that are not in the index 2021-06-16 10:57:55 +10:00
dgtlmoon
878584f043 Fix typo 2021-06-15 14:34:10 +10:00
dgtlmoon
b4fa7d2089 Re #88 - placeholder text on CSS rule 2021-06-15 14:13:01 +10:00
dgtlmoon
b0592df3cb Re #86 - fix typo 2021-06-15 11:14:16 +10:00
dgtlmoon
ddd8bd34f2 0.32 release 2021-06-15 09:50:24 +10:00
dgtlmoon
afea79adf9 Sassify the diff page 2021-06-14 21:04:06 +10:00
dgtlmoon
444510c9ca "Sassify" the theme, easier to manage 2021-06-14 20:42:42 +10:00
dgtlmoon
1f1d2708c6 Mobile fixes (#87)
#48 - Settings page on android didnt work
- Responsive table layout for the watch list
- Few more improvements
2021-06-14 19:40:41 +10:00
dgtlmoon
bae6641777 Re #86 - Refactor scrub date limit code 2021-06-14 17:56:09 +10:00
dgtlmoon
17830de489 Tweak comments 2021-06-13 11:02:11 +10:00
dgtlmoon
0acf9cc9cb Re #77 - Repair and refactor time threshold check code 2021-06-13 10:59:15 +10:00
khakers
cff8959462 Modifies Dockerfile to use multistage builds (#79) 2021-06-08 12:30:45 +10:00
dgtlmoon
4b6522469b Bumping to 0.31 2021-06-05 16:37:16 +10:00
dgtlmoon
609a0a3aad Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-06-03 10:51:18 +10:00
dgtlmoon
ad8065c072 Re #75 - Adding test to confirm watched URL appears in RSS feed 2021-06-03 10:50:59 +10:00
dgtlmoon
2346b42ef2 CSS selector filter (#73)
* Re #9 CSS Selector filtering,  Adding test for #9
2021-05-30 21:22:26 +10:00
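
Roughly, the flow is: apply the CSS selector, keep only the matching HTML, then hand that fragment to inscriptis for text extraction; a minimal sketch (BeautifulSoup is assumed here for the selection step).

from bs4 import BeautifulSoup
from inscriptis import get_text

html_doc = "<html><body><div class='price'>Now <b>19.99</b></div><footer>unrelated adverts</footer></body></html>"
soup = BeautifulSoup(html_doc, "html.parser")
# Keep only the parts of the page that match the CSS filter.
fragment = "".join(str(el) for el in soup.select("div.price"))
print(get_text(fragment).strip())  # Now 19.99
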
dgtlmoon
1a0c3f1250 Fixing var name 2021-05-28 10:27:01 +10:00
dgtlmoon
91f69b92a2 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-05-28 10:20:53 +10:00
dgtlmoon
dd211d166c Include release metadata during github build 2021-05-28 10:20:42 +10:00
dgtlmoon
a6b0a23143 Update README.md 2021-05-28 00:03:17 +10:00
dgtlmoon
a03e53d826 Re #40 Ability to set individual timers (#72)
* Re #40 Ability to set individual timers
2021-05-27 23:55:05 +10:00
dgtlmoon
5d93009605 Update README.md 2021-05-27 20:37:56 +10:00
dgtlmoon
d4f3e744de Improvements for backup (#70)
* Remove previous backup files

* Backup - Add a text file containing only the URLs, with Windows+UNIX line-endings, for better portability.

* Fix filename on backup not being correct
2021-05-27 20:16:40 +10:00
dgtlmoon
13de31cf98 Update README.md 2021-05-26 21:26:35 +10:00
dgtlmoon
54ae82395a Disable image layer cache service 2021-05-25 16:46:13 +10:00
dgtlmoon
dba8944625 Re-enable ARM v6/v7 builds 2021-05-25 16:08:01 +10:00
dgtlmoon
270343b276 Install requirements, remove rust and dev packages that are no longer needed, hopefully for a smaller docker layer size 2021-05-25 15:06:35 +10:00
dgtlmoon
f3ce9b732c Remove rust build comments 2021-05-25 15:05:36 +10:00
dgtlmoon
baaee30499 Arm build fixes (#68)
* Add rustc compiler and remove when not needed (smaller docker layer)

* Using the magical ARG CRYPTOGRAPHY_DONT_BUILD_RUST=1 to get around ARM issues
2021-05-25 15:04:33 +10:00
dgtlmoon
d50ff0b31c Re #65 - Append BASE_URL env var to the notification if it is set (#66)
* Re #65 - Append BASE_URL env var to the notification if it is set
2021-05-21 09:16:19 +10:00
dgtlmoon
395a6fca62 Update README.md 2021-05-19 13:09:41 +10:00
dgtlmoon
f582810ad0 Adding BTC support instructions 2021-05-18 23:34:56 +10:00
dgtlmoon
18b71edd6d Switch to just amd64 for now due to apprise not building on ARM 2021-05-15 21:23:05 +10:00
dgtlmoon
28f6af9153 Fixing syntax 2021-05-15 18:20:34 +10:00
dgtlmoon
63a3492547 Re #49 Re #60 - Adding more information about proxy setup to README.md 2021-05-15 18:13:00 +10:00
Unpublished
454fc26341 Add socks proxy support (#60)
* Add socks proxy support

* Add proxy config to README
2021-05-15 18:05:58 +10:00
KibosJ
e5409f8d16 Created docker-compose file (#55)
* Created docker-compose file, Removed version tag as per latest compose specification
2021-05-15 11:48:38 +10:00
dgtlmoon
1b736b3726 Re #58 - reduce to 1 minute (a small rewrite is required to change the backend to store in 'seconds' instead of minutes) 2021-05-13 22:33:33 +10:00
dgtlmoon
96f2b0d248 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-05-13 22:24:45 +10:00
dgtlmoon
308527f45e 56 - Fix notification test 2021-05-13 22:23:49 +10:00
dgtlmoon
70d766b647 Update README.md 2021-05-08 23:16:16 +10:00
dgtlmoon
40be9c615f Update README.md 2021-05-08 23:15:50 +10:00
dgtlmoon
f380754ff5 Adding rust compiler :( 2021-05-08 12:27:39 +10:00
dgtlmoon
bee6bd9fe0 trying without libssl and only libffi 2021-05-08 12:17:28 +10:00
dgtlmoon
fec2862ebe Adding extra libs required for build 2021-05-08 12:01:52 +10:00
dgtlmoon
969420e40b Cleanup docs 2021-05-08 11:44:43 +10:00
dgtlmoon
afba06dd1f Tweak workflow (tests) 2021-05-08 11:38:27 +10:00
dgtlmoon
1d66160e8c Security update 2021-05-08 11:33:46 +10:00
dgtlmoon
f877af75b9 Apprise notifications (#43)
* issue #4 Adding settings screen for apprise URLS
* Adding test notification mechanism

* Move Worker module to own class file

* Adding basic notification URL runner

* Tests for notifications

* Tweak readme with notification info

* Move notification test to main test_backend.py

* Fix spacing

* Adding notifications screenshot

* Cleanup more files from test

* Offer send notification test on individual edits and main/default

* Process global notifications

* All branches test

* Wrap worker notification process in try/catch, use global if nothing set

* Fix syntax

* Handle exception, increase wait time for liveserver to come up

* Fixing test setup

* remove debug

* Split tests into their own totally isolated setups, if you know a better way to make live_server() work, MR :)

* Tidying up lint/imports
2021-05-08 11:29:41 +10:00
dgtlmoon
b752690f89 Fixing security update 2021-05-08 10:19:49 +10:00
dgtlmoon
a10efa951b Also detect pytest in the environ (for local debug) 2021-05-03 11:20:11 +10:00
dgtlmoon
24a38f26f8 Prepend 'test-' when running under pytest to guid 2021-05-03 11:03:00 +10:00
dgtlmoon
1d0018dced - Relabel login button
- misc test cleanup
2021-05-01 11:55:24 +10:00
dgtlmoon
18c7a18be8 Re #46 - Add note to README.md about Javascript support 2021-05-01 10:02:43 +10:00
dgtlmoon
c11adcbe4a Bumping version 2021-05-01 01:20:56 +10:00
dgtlmoon
cd6ce89587 Re #45 - Set datastore path in app.config 2021-05-01 01:18:59 +10:00
dgtlmoon
4164ad29e3 Re #44 - Broke the menu by accident, adding tests and fixing. 2021-04-30 19:54:23 +10:00
dgtlmoon
4953e253e9 bump to 0.29 2021-04-30 17:17:23 +10:00
dgtlmoon
64e172433a docker-compose for dev not needed (use venv etc) 2021-04-30 16:54:07 +10:00
dgtlmoon
92c0fa90ee Password protection / login support (#34)
Issue #24 Password login  hashlib.pbkdf2_hmac implementation
2021-04-30 16:47:13 +10:00
dgtlmoon
ee8053e0e8 Update FUNDING.yml 2021-04-21 11:13:50 +10:00
dgtlmoon
7f5b592f6f Skip using tag limit on pause when no tag is being viewed 2021-04-16 10:29:03 +10:00
dgtlmoon
1e45156bc0 Pause/Unpause should respect limit tag on redirect 2021-04-10 19:47:31 +09:30
dgtlmoon
c7169ebba1 Validate duplicate URLs 2021-04-10 14:31:57 +09:30
dgtlmoon
a58679f983 Chdir is not needed because we add the file from the full path, but make it 'relative' in the Zip 2021-04-09 04:50:55 +02:00
dgtlmoon
661542b056 Fix backup generation on relative paths (like when run outside docker, under venv, etc) 2021-04-09 04:49:50 +02:00
dgtlmoon
2ea48cb90a Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-04-04 06:32:04 +02:00
dgtlmoon
2a80022cd9 Adding noopener per CodeQL, stop pages from knowing the referer etc 2021-04-04 06:31:42 +02:00
dgtlmoon
8861f70ac4 Create codeql-analysis.yml 2021-04-04 06:27:32 +02:00
dgtlmoon
07113216d5 yarl not needed, lock requests version 2021-04-03 10:28:11 +02:00
dgtlmoon
02062c5893 dev packages needed, drop apt cache 2021-04-03 09:05:02 +02:00
dgtlmoon
a11f09062b See if we get a clean buildx without dev packages 2021-04-03 08:45:24 +02:00
dgtlmoon
0bb48cbd43 Tweaking build size thanks to https://github.com/hadolint/hadolint 2021-04-03 08:04:42 +02:00
dgtlmoon
7109a17a8e Adding dockerignore 2021-04-03 07:59:22 +02:00
dgtlmoon
4ed026aba6 Re #18 - Show "preview" of the page when only one revision exists (#33) 2021-04-03 05:55:43 +02:00
dgtlmoon
3b79f8ed4e Update README.md 2021-04-02 05:00:58 +02:00
dgtlmoon
5d02c4fe6f Update README.md 2021-04-02 04:58:49 +02:00
dgtlmoon
f2b06c63bf Also check that the watch is not paused before putting it into the checking queue 2021-04-02 03:58:23 +02:00
dgtlmoon
ab6f4d11ed revert c60be56271 2021-04-02 03:07:36 +02:00
dgtlmoon
5311a95140 remove extra packages (#32)
* remove extra packages

* add test only workflow
2021-04-02 02:57:48 +02:00
dgtlmoon
fb723c264d Bumping version to 0.28 2021-04-01 14:43:46 +02:00
dgtlmoon
3ad722d63c Docker push amd64 rpi etc (#28)
* trying multiarch docker hub push on build, similar to https://github.com/dgtlmoon/changedetection.io/pull/25/files

* Adding image builder

* Include our dev branch

* Tweak buildx

* dont use alias

* Finally found the right info at https://docs.docker.com/ci-cd/github-actions/

* Updated from https://github.com/razorpay/docker-build-push-action

* Tweaks to build

* Tweaks

* Minor tweaks to version

* tweaks

* Remove version

* Remove old workflow

* syntax cleanup
2021-04-01 14:10:23 +02:00
dgtlmoon
9c16695932 Open [diff] links into their own window 2021-04-01 12:57:47 +02:00
dgtlmoon
35fc76c02c Fix auto jump on viewing the diff 2021-04-01 12:53:19 +02:00
dgtlmoon
934d8c6211 Re #30 - Delete history watch snapshots (#31)
Re #30 - Delete history watch snapshots  Scrub - Optionally delete history snapshots newer than timestamp
2021-04-01 12:01:42 +02:00
dgtlmoon
294256d5c3 Merge branch 'master' of github.com:dgtlmoon/changedetection.io 2021-03-29 18:38:20 +02:00
dgtlmoon
b7efdfd52c Slow down the DB write interval and catch the case that it changed during write 2021-03-29 18:37:03 +02:00
dgtlmoon
6a78b5ad1d Immediately 'jump' to the change 2021-03-29 18:36:50 +02:00
dgtlmoon
98f3e61314 Tweak to hover pause icon 2021-03-29 18:36:31 +02:00
dgtlmoon
e322c44d3e Stop runtime error on dict changing during write/init at start (#27)
* Lock datastore when writing

* Racecase fix

* Tweaks to locking (add delay)
2021-03-29 18:23:13 +02:00
dgtlmoon
7b226e1d54 Merge pull request #26 from dgtlmoon/pause
Re #22 - ability to pause
2021-03-29 16:14:16 +02:00
dgtlmoon
35e597a4c8 Re #22 - ability to pause 2021-03-29 16:11:22 +02:00
dgtlmoon
0a1a8340c2 Re #23 - always check value of interval time, not just on start 2021-03-29 15:04:15 +02:00
dgtlmoon
8b5cd40593 Update README.md 2021-03-26 11:07:06 +01:00
dgtlmoon
7d978a6e65 Merge pull request #19 from dgtlmoon/markdown-tweak
Use absolute image links so the screenshots work from docker hub
2021-03-04 09:59:37 +01:00
dgtlmoon
fdab52d400 Use absolute image links so the screenshots work from docker hub 2021-03-04 09:58:58 +01:00
dgtlmoon
782795310f Update README.md
Removing text that is tricky to maintain and confusing
2021-03-03 09:01:14 +01:00
Leigh Morresi
2280e6d497 Updating screenshot 2021-03-01 16:12:30 +01:00
Leigh Morresi
822f3e6d20 Reuse the GUID if we have one 2021-03-01 16:01:53 +01:00
241 changed files with 21666 additions and 3283 deletions

2
.dockerignore Normal file

@@ -0,0 +1,2 @@
.git
.github

9
.github/FUNDING.yml vendored

@@ -1,12 +1,3 @@
# These are supported funding model platforms
github: dgtlmoon
patreon: # Replace with a single Patreon username
open_collective: # Replace with a single Open Collective username
ko_fi: # Replace with a single Ko-fi username
tidelift: # Replace with a single Tidelift platform-name/package-name e.g., npm/babel
community_bridge: # Replace with a single Community Bridge project-name e.g., cloud-foundry
liberapay: # Replace with a single Liberapay username
issuehunt: # Replace with a single IssueHunt username
otechie: # Replace with a single Otechie username
custom: # Replace with up to 4 custom sponsorship URLs e.g., ['link1', 'link2']

58
.github/ISSUE_TEMPLATE/bug_report.md vendored Normal file

@@ -0,0 +1,58 @@
---
name: Bug report
about: Create a bug report, if you don't follow this template, your report will be DELETED
title: ''
labels: 'triage'
assignees: 'dgtlmoon'
---
**DO NOT USE THIS FORM TO REPORT THAT A PARTICULAR WEBSITE IS NOT SCRAPING/WATCHING AS EXPECTED**
This form is only for direct bugs and feature requests todo directly with the software.
Please report watched websites (full URL and _any_ settings) that do not work with changedetection.io as expected [**IN THE DISCUSSION FORUMS**](https://github.com/dgtlmoon/changedetection.io/discussions) or your report will be deleted
CONSIDER TAKING OUT A SUBSCRIPTION FOR A SMALL PRICE PER MONTH, YOU GET THE BENEFIT OF USING OUR PAID PROXIES AND FURTHERING THE DEVELOPMENT OF CHANGEDETECTION.IO
THANK YOU
**Describe the bug**
A clear and concise description of what the bug is.
**Version**
*Exact version* in the top right area: 0....
**To Reproduce**
Steps to reproduce the behavior:
1. Go to '...'
2. Click on '....'
3. Scroll down to '....'
4. See error
! ALWAYS INCLUDE AN EXAMPLE URL WHERE IT IS POSSIBLE TO RE-CREATE THE ISSUE - USE THE 'SHARE WATCH' FEATURE AND PASTE IN THE SHARE-LINK!
**Expected behavior**
A clear and concise description of what you expected to happen.
**Screenshots**
If applicable, add screenshots to help explain your problem.
**Desktop (please complete the following information):**
- OS: [e.g. iOS]
- Browser [e.g. chrome, safari]
- Version [e.g. 22]
**Smartphone (please complete the following information):**
- Device: [e.g. iPhone6]
- OS: [e.g. iOS8.1]
- Browser [e.g. stock browser, safari]
- Version [e.g. 22]
**Additional context**
Add any other context about the problem here.

.github/ISSUE_TEMPLATE/feature_request.md vendored Normal file

@@ -0,0 +1,23 @@
---
name: Feature request
about: Suggest an idea for this project
title: '[feature]'
labels: 'enhancement'
assignees: ''
---
**Version and OS**
For example, 0.123 on linux/docker
**Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe the use-case and give concrete real-world examples**
Attach any HTML/JSON, give links to sites, screenshots etc, we are not mind readers
**Additional context**
Add any other context or screenshots about the feature request here.

31
.github/test/Dockerfile-alpine vendored Normal file

@@ -0,0 +1,31 @@
# Taken from https://github.com/linuxserver/docker-changedetection.io/blob/main/Dockerfile
# Test that we can still build on Alpine (musl modified libc https://musl.libc.org/)
# Some packages wont install via pypi because they dont have a wheel available under this architecture.
FROM ghcr.io/linuxserver/baseimage-alpine:3.16
ENV PYTHONUNBUFFERED=1
COPY requirements.txt /requirements.txt
RUN \
apk add --update --no-cache --virtual=build-dependencies \
cargo \
g++ \
gcc \
libc-dev \
libffi-dev \
libxslt-dev \
make \
openssl-dev \
py3-wheel \
python3-dev \
zlib-dev && \
apk add --update --no-cache \
libxslt \
python3 \
py3-pip && \
echo "**** pip3 install test of changedetection.io ****" && \
pip3 install -U pip wheel setuptools && \
pip3 install -U --no-cache-dir --find-links https://wheel-index.linuxserver.io/alpine-3.16/ -r /requirements.txt && \
apk del --purge \
build-dependencies

62
.github/workflows/codeql-analysis.yml vendored Normal file

@@ -0,0 +1,62 @@
# For most projects, this workflow file will not need changing; you simply need
# to commit it to your repository.
#
# You may wish to alter this file to override the set of languages analyzed,
# or to provide custom queries or build logic.
#
# ******** NOTE ********
# We have attempted to detect the languages in your repository. Please check
# the `language` matrix defined below to confirm you have the correct set of
# supported CodeQL languages.
#
name: "CodeQL"
on:
  schedule:
    - cron: '27 9 * * 4'
jobs:
  analyze:
    name: Analyze
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        language: [ 'javascript', 'python' ]
        # CodeQL supports [ 'cpp', 'csharp', 'go', 'java', 'javascript', 'python' ]
        # Learn more:
        # https://docs.github.com/en/free-pro-team@latest/github/finding-security-vulnerabilities-and-errors-in-your-code/configuring-code-scanning#changing-the-languages-that-are-analyzed
    steps:
      - name: Checkout repository
        uses: actions/checkout@v2
      # Initializes the CodeQL tools for scanning.
      - name: Initialize CodeQL
        uses: github/codeql-action/init@v1
        with:
          languages: ${{ matrix.language }}
          # If you wish to specify custom queries, you can do so here or in a config file.
          # By default, queries listed here will override any specified in a config file.
          # Prefix the list here with "+" to use these queries and those in the config file.
          # queries: ./path/to/local/query, your-org/your-repo/queries@main
      # Autobuild attempts to build any compiled languages (C/C++, C#, or Java).
      # If this step fails, then you should remove it and run the build manually (see below)
      - name: Autobuild
        uses: github/codeql-action/autobuild@v1
      # Command-line programs to run using the OS shell.
      # 📚 https://git.io/JvXDl
      # ✏️ If the Autobuild fails above, remove it and uncomment the following three lines
      # and modify them (or add more) to build your code if your project
      # uses a compiled language
      #- run: |
      #   make bootstrap
      #   make release
      - name: Perform CodeQL Analysis
        uses: github/codeql-action/analyze@v1

131
.github/workflows/containers.yml vendored Normal file

@@ -0,0 +1,131 @@
name: Build and push containers
on:
  # Automatically triggered by a testing workflow passing, but this is only checked when it lands in the `master`/default branch
  # workflow_run:
  # workflows: ["ChangeDetection.io Test"]
  # branches: [master]
  # tags: ['0.*']
  # types: [completed]
  # Or a new tagged release
  release:
    types: [published, edited]
  push:
    branches:
      - master
jobs:
  metadata:
    runs-on: ubuntu-latest
    steps:
      - name: Show metadata
        run: |
          echo SHA ${{ github.sha }}
          echo github.ref: ${{ github.ref }}
          echo github_ref: $GITHUB_REF
          echo Event name: ${{ github.event_name }}
          echo Ref ${{ github.ref }}
          echo c: ${{ github.event.workflow_run.conclusion }}
          echo r: ${{ github.event.workflow_run }}
          echo tname: "${{ github.event.release.tag_name }}"
          echo headbranch: -${{ github.event.workflow_run.head_branch }}-
          set
  build-push-containers:
    runs-on: ubuntu-latest
    # If the testing workflow has a success, then we build to :latest
    # Or if we are in a tagged release scenario.
    if: ${{ github.event.workflow_run.conclusion == 'success' }} || ${{ github.event.release.tag_name }} != ''
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Create release metadata
        run: |
          # COPY'ed by Dockerfile into changedetectionio/ of the image, then read by the server in store.py
          echo ${{ github.sha }} > changedetectionio/source.txt
          echo ${{ github.ref }} > changedetectionio/tag.txt
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
        with:
          image: tonistiigi/binfmt:latest
          platforms: all
      - name: Login to GitHub Container Registry
        uses: docker/login-action@v1
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Login to Docker Hub Container Registry
        uses: docker/login-action@v1
        with:
          username: ${{ secrets.DOCKER_HUB_USERNAME }}
          password: ${{ secrets.DOCKER_HUB_ACCESS_TOKEN }}
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
        with:
          install: true
          version: latest
          driver-opts: image=moby/buildkit:master
      # master branch -> :dev container tag
      - name: Build and push :dev
        id: docker_build
        if: ${{ github.ref }} == "refs/heads/master"
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./Dockerfile
          push: true
          tags: |
            ${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:dev,ghcr.io/${{ github.repository }}:dev
          platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache
          provenance: false
      # A new tagged release is required, which builds :tag and :latest
      - name: Build and push :tag
        id: docker_build_tag_release
        if: github.event_name == 'release' && startsWith(github.event.release.tag_name, '0.')
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./Dockerfile
          push: true
          tags: |
            ${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:${{ github.event.release.tag_name }}
            ghcr.io/dgtlmoon/changedetection.io:${{ github.event.release.tag_name }}
            ${{ secrets.DOCKER_HUB_USERNAME }}/changedetection.io:latest
            ghcr.io/dgtlmoon/changedetection.io:latest
          platforms: linux/amd64,linux/arm64,linux/arm/v6,linux/arm/v7
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache
          provenance: false
      - name: Image digest
        run: echo step SHA ${{ steps.vars.outputs.sha_short }} tag ${{steps.vars.outputs.tag}} branch ${{steps.vars.outputs.branch}} digest ${{ steps.docker_build.outputs.digest }}
      - name: Cache Docker layers
        uses: actions/cache@v2
        with:
          path: /tmp/.buildx-cache
          key: ${{ runner.os }}-buildx-${{ github.sha }}
          restore-keys: |
            ${{ runner.os }}-buildx-

38
.github/workflows/pypi.yml vendored Normal file

@@ -0,0 +1,38 @@
name: PyPi Test and Push tagged release
# Triggers the workflow on push or pull request events
on:
  workflow_run:
    workflows: ["ChangeDetection.io Test"]
    tags: '*.*'
    types: [completed]
jobs:
  test-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Test that pip builds without error
        run: |
          pip3 --version
          python3 -m pip install wheel
          python3 setup.py bdist_wheel
          python3 -m pip install dist/changedetection.io-*-none-any.whl --force
          changedetection.io -d /tmp -p 10000 &
          sleep 3
          curl http://127.0.0.1:10000/static/styles/pure-min.css >/dev/null
          killall -9 changedetection.io
# https://github.com/docker/build-push-action/blob/master/docs/advanced/test-before-push.md ?
# https://github.com/docker/buildx/issues/59 ? Needs to be one platform?
# https://github.com/docker/buildx/issues/495#issuecomment-918925854
#if: ${{ github.event_name == 'release'}}


@@ -1,33 +0,0 @@
# This workflow will install Python dependencies, run tests and lint with a single version of Python
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-python-with-github-actions
name: changedetection.io
on: [push, pull_request]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      - name: Install dependencies
        run: |
          python -m pip install --upgrade pip
          pip install flake8 pytest
          if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
      - name: Lint with flake8
        run: |
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Test with pytest
        run: |
          cd backend; pytest


@@ -0,0 +1,68 @@
name: ChangeDetection.io Container Build Test
# Triggers the workflow on push or pull request events
# This line doesnt work, even tho it is the documented one
#on: [push, pull_request]
on:
  push:
    paths:
      - requirements.txt
      - Dockerfile
      - .github/workflows/*
  pull_request:
    paths:
      - requirements.txt
      - Dockerfile
      - .github/workflows/*
# Changes to requirements.txt packages and Dockerfile may or may not always be compatible with arm etc, so worth testing
# @todo: some kind of path filter for requirements.txt and Dockerfile
jobs:
  test-container-build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Set up Python 3.9
        uses: actions/setup-python@v2
        with:
          python-version: 3.9
      # Just test that the build works, some libraries won't compile on ARM/rPi etc
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1
        with:
          image: tonistiigi/binfmt:latest
          platforms: all
      - name: Set up Docker Buildx
        id: buildx
        uses: docker/setup-buildx-action@v1
        with:
          install: true
          version: latest
          driver-opts: image=moby/buildkit:master
      # https://github.com/dgtlmoon/changedetection.io/pull/1067
      # Check we can still build under alpine/musl
      - name: Test that the docker containers can build (musl via alpine check)
        id: docker_build_musl
        uses: docker/build-push-action@v2
        with:
          context: ./
          file: ./.github/test/Dockerfile-alpine
          platforms: linux/amd64,linux/arm64
      - name: Test that the docker containers can build
        id: docker_build
        uses: docker/build-push-action@v2
        # https://github.com/docker/build-push-action#customizing
        with:
          context: ./
          file: ./Dockerfile
          platforms: linux/arm/v7,linux/arm/v6,linux/amd64,linux/arm64,
          cache-from: type=local,src=/tmp/.buildx-cache
          cache-to: type=local,dest=/tmp/.buildx-cache

77
.github/workflows/test-only.yml vendored Normal file

@@ -0,0 +1,77 @@
name: ChangeDetection.io App Test
# Triggers the workflow on push or pull request events
on: [push, pull_request]
jobs:
  test-application:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Mainly just for link/flake8
      - name: Set up Python 3.10
        uses: actions/setup-python@v2
        with:
          python-version: '3.10'
      - name: Lint with flake8
        run: |
          pip3 install flake8
          # stop the build if there are Python syntax errors or undefined names
          flake8 . --count --select=E9,F63,F7,F82 --show-source --statistics
          # exit-zero treats all errors as warnings. The GitHub editor is 127 chars wide
          flake8 . --count --exit-zero --max-complexity=10 --max-line-length=127 --statistics
      - name: Spin up ancillary testable services
        run: |
          docker network create changedet-network
          # Selenium+browserless
          docker run --network changedet-network -d --hostname selenium -p 4444:4444 --rm --shm-size="2g" selenium/standalone-chrome-debug:3.141.59
          docker run --network changedet-network -d --hostname browserless -e "DEFAULT_LAUNCH_ARGS=[\"--window-size=1920,1080\"]" --rm -p 3000:3000 --shm-size="2g" browserless/chrome:1.53-chrome-stable
      - name: Build changedetection.io container for testing
        run: |
          # Build a changedetection.io container and start testing inside
          docker build . -t test-changedetectionio
      - name: Test built container with pytest
        run: |
          # Unit tests
          docker run test-changedetectionio bash -c 'python3 -m unittest changedetectionio.tests.unit.test_notification_diff'
          # All tests
          docker run --network changedet-network test-changedetectionio bash -c 'cd changedetectionio && ./run_basic_tests.sh'
      - name: Test built container selenium+browserless/playwright
        run: |
          # Selenium fetch
          docker run -e "WEBDRIVER_URL=http://selenium:4444/wd/hub" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest tests/fetchers/test_content.py && pytest tests/test_errorhandling.py'
          # Playwright/Browserless fetch
          docker run -e "PLAYWRIGHT_DRIVER_URL=ws://browserless:3000" --network changedet-network test-changedetectionio bash -c 'cd changedetectionio;pytest tests/fetchers/test_content.py && pytest tests/test_errorhandling.py && pytest tests/visualselector/test_fetch_data.py'
      - name: Test proxy interaction
        run: |
          cd changedetectionio
          ./run_proxy_tests.sh
          cd ..
      - name: Test changedetection.io container starts+runs basically without error
        run: |
          docker run -p 5556:5000 -d test-changedetectionio
          sleep 3
          # Should return 0 (no error) when grep finds it
          curl -s http://localhost:5556 |grep -q checkbox-uuid
          # and IPv6
          curl -s -g -6 "http://[::1]:5556"|grep -q checkbox-uuid
#export WEBDRIVER_URL=http://localhost:4444/wd/hub
#pytest tests/fetchers/test_content.py
#pytest tests/test_errorhandling.py

7
.gitignore vendored
View File

@@ -5,3 +5,10 @@ datastore/url-watches.json
datastore/*
__pycache__
.pytest_cache
build
dist
venv
test-datastore/*
test-datastore
*.egg-info*
.vscode/settings.json

9
CONTRIBUTING.md Normal file
View File

@@ -0,0 +1,9 @@
Contributing is always welcome!
I am not a professional Flask developer; if you know a better way something can be done, please let me know!
Otherwise, it's always best to PR into the `dev` branch.
Please be sure that all new functionality has a matching test!
Use `pytest` to validate/test; you can run an existing test with, for example, `pytest tests/test_notification.py`

View File

@@ -1,28 +1,65 @@
FROM python:3.8-slim
COPY requirements.txt /tmp/requirements.txt
RUN pip3 install -r /tmp/requirements.txt
# pip dependencies install stage
FROM python:3.10-slim as builder
# See `cryptography` pin comment in requirements.txt
ARG CRYPTOGRAPHY_DONT_BUILD_RUST=1
RUN apt-get update && apt-get install -y --no-install-recommends \
g++ \
gcc \
libc-dev \
libffi-dev \
libjpeg-dev \
libssl-dev \
libxslt-dev \
make \
zlib1g-dev
RUN mkdir /install
WORKDIR /install
COPY requirements.txt /requirements.txt
RUN pip install --target=/dependencies -r /requirements.txt
# Playwright is an alternative to Selenium
# Excluded this package from requirements.txt to prevent arm/v6 and arm/v7 builds from failing
# https://github.com/dgtlmoon/changedetection.io/pull/1067 also musl/alpine (not supported)
RUN pip install --target=/dependencies playwright~=1.27.1 \
|| echo "WARN: Failed to install Playwright. The application can still run, but the Playwright option will be disabled."
# Final image stage
FROM python:3.10-slim
RUN apt-get update && apt-get install -y --no-install-recommends \
libssl1.1 \
libxslt1.1 \
# For pdftohtml
poppler-utils \
zlib1g \
&& apt-get clean && rm -rf /var/lib/apt/lists/*
RUN [ ! -d "/app" ] && mkdir /app
# https://stackoverflow.com/questions/58701233/docker-logs-erroneously-appears-empty-until-container-stops
ENV PYTHONUNBUFFERED=1
RUN [ ! -d "/datastore" ] && mkdir /datastore
# Re #80, sets SECLEVEL=1 in openssl.conf to allow monitoring sites with weak/old cipher suites
RUN sed -i 's/^CipherString = .*/CipherString = DEFAULT@SECLEVEL=1/' /etc/ssl/openssl.cnf
# Copy modules over to the final image and add their dir to PYTHONPATH
COPY --from=builder /dependencies /usr/local
ENV PYTHONPATH=/usr/local
EXPOSE 5000
# The actual flask app
COPY backend /app/backend
COPY changedetectionio /app/changedetectionio
# The eventlet server wrapper
COPY changedetection.py /app/changedetection.py
WORKDIR /app
# https://stackoverflow.com/questions/58701233/docker-logs-erroneously-appears-empty-until-container-stops
ENV PYTHONUNBUFFERED=1
# Attempt to store the triggered commit
ARG SOURCE_COMMIT
ARG SOURCE_BRANCH
RUN echo "commit: $SOURCE_COMMIT branch: $SOURCE_BRANCH" >/source.txt
CMD [ "python", "./changedetection.py" , "-d", "/datastore"]

14
MANIFEST.in Normal file
View File

@@ -0,0 +1,14 @@
recursive-include changedetectionio/api *
recursive-include changedetectionio/blueprint *
recursive-include changedetectionio/model *
recursive-include changedetectionio/res *
recursive-include changedetectionio/static *
recursive-include changedetectionio/templates *
recursive-include changedetectionio/tests *
prune changedetectionio/static/package-lock.json
prune changedetectionio/static/styles/node_modules
prune changedetectionio/static/styles/package-lock.json
include changedetection.py
global-exclude *.pyc
global-exclude node_modules
global-exclude venv

1
Procfile Normal file
View File

@@ -0,0 +1 @@
web: python3 ./changedetection.py -C -d ./datastore -p $PORT

58
README-pip.md Normal file
View File

@@ -0,0 +1,58 @@
## Web Site Change Detection, Monitoring and Notification.
Live your data-life pro-actively, track website content changes and receive notifications via Discord, Email, Slack, Telegram and 70+ more
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://lemonade.changedetection.io/start?src=pip)
[**Don't have time? Let us host it for you! Try our extremely affordable subscription, use our proxies and support!**](https://lemonade.changedetection.io/start)
#### Example use cases
- Products and services have a change in pricing
- _Out of stock notification_ and _Back In stock notification_
- Governmental department updates (changes are often only on their websites)
- New software releases, security advisories when you're not on their mailing list.
- Festivals with changes
- Real estate listing changes
- Know when your favourite whiskey is on sale, or other special deals are announced before anyone else
- COVID related news from government websites
- University/organisation news from their website
- Detect and monitor changes in JSON API responses
- JSON API monitoring and alerting
- Changes in legal and other documents
- Trigger API calls via notifications when text appears on a website
- Glue together APIs using the JSON filter and JSON notifications
- Create RSS feeds based on changes in web content
- Monitor HTML source code for unexpected changes, strengthen your PCI compliance
- You have a very sensitive list of URLs to watch and you do _not_ want to use the paid alternatives. (Remember, _you_ are the product)
_Need an actual Chrome runner with JavaScript support? We support fetching via WebDriver and Playwright!_
#### Key Features
- Lots of trigger filters, such as "Trigger on text", "Remove text by selector", "Ignore text", "Extract text", also using regular-expressions!
- Target elements with XPath and CSS selectors; easily monitor complex JSON with JSONPath or jq
- Switch between fast non-JS and Chrome JS based "fetchers"
- Easily specify how often a site should be checked
- Execute JS before extracting text (Good for logging in, see examples in the UI!)
- Override Request Headers, Specify `POST` or `GET` and other methods
- Use the "Visual Selector" to help target specific elements
```bash
$ pip3 install changedetection.io
```
Specify a target for the *datastore path* with `-d` (required) and a *listening port* with `-p` (defaults to `5000`)
```bash
$ changedetection.io -d /path/to/empty/data/dir -p 5000
```
Then visit http://127.0.0.1:5000 and you should now be able to access the UI.
See https://github.com/dgtlmoon/changedetection.io for more information.

263
README.md
View File

@@ -1,63 +1,254 @@
# changedetection.io
![changedetection.io](https://github.com/dgtlmoon/changedetection.io/actions/workflows/python-app.yml/badge.svg?branch=master)
<a href="https://hub.docker.com/r/dgtlmoon/changedetection.io" target="_blank" title="Change detection docker hub">
<img src="https://img.shields.io/docker/pulls/dgtlmoon/changedetection.io" alt="Docker Pulls"/>
</a>
<a href="https://hub.docker.com/r/dgtlmoon/changedetection.io" target="_blank" title="Change detection docker hub">
<img src="https://img.shields.io/docker/v/dgtlmoon/changedetection.io/0.27" alt="Change detection latest tag version"/>
</a>
## Web Site Change Detection, Monitoring and Notification.
## Self-hosted change monitoring of web pages.
**_Detect website content changes and perform meaningful actions - trigger notifications via Discord, Email, Slack, Telegram, API calls and many more._**
_Know when web pages change! Stay on top of new information!_
![Self-hosted web page change monitoring application screenshot](screenshot.png?raw=true "Self-hosted web page change monitoring screenshot")
_Live your data-life pro-actively._
#### Example use cases
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot.png" style="max-width:100%;" alt="Self-hosted web page change monitoring" title="Self-hosted web page change monitoring" />](https://lemonade.changedetection.io/start?src=github)
Know when ...
[![Release Version][release-shield]][release-link] [![Docker Pulls][docker-pulls]][docker-link] [![License][license-shield]](LICENSE.md)
- Government department updates (changes are often only on their websites)
- Local government news (changes are often only on their websites)
![changedetection.io](https://github.com/dgtlmoon/changedetection.io/actions/workflows/test-only.yml/badge.svg?branch=master)
[**Don't have time? Let us host it for you! Try our $8.99/month subscription - use our proxies and support!**](https://lemonade.changedetection.io/start), _half the price of other website change monitoring services, and it comes with unlimited watches & checks!_
- Chrome browser included.
- Super fast, no registration needed setup.
- Get started watching and receiving website change notifications straight away.
### Target specific parts of the webpage using the Visual Selector tool.
Available when connected to a <a href="https://github.com/dgtlmoon/changedetection.io/wiki/Playwright-content-fetcher">playwright content fetcher</a> (included as part of our subscription service)
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/visualselector-anim.gif" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />](https://lemonade.changedetection.io/start?src=github)
### Easily see what changed, examine by word, line, or individual character.
[<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-diff.png" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Self-hosted web page change monitoring context difference " />](https://lemonade.changedetection.io/start?src=github)
### Perform interactive browser steps
Fill in text boxes, click buttons and more, set up your change detection scenario.
Using the **Browser Steps** configuration, add basic steps before performing change detection, such as logging into websites, adding a product to a cart, accepting cookie prompts, entering dates and refining searches.
[<img src="docs/browsersteps-anim.gif" style="max-width:100%;" alt="Self-hosted web page change monitoring context difference " title="Website change detection with interactive browser steps, login, cookies etc" />](https://lemonade.changedetection.io/start?src=github)
After **Browser Steps** have been run, visit the **Visual Selector** tab to refine the content you're interested in.
Requires Playwright to be enabled.
### Example use cases
- Products and services have a change in pricing
- _Out of stock notification_ and _Back In stock notification_
- Monitor and track PDF file changes, know when a PDF file has text changes.
- Governmental department updates (changes are often only on their websites)
- New software releases, security advisories when you're not on their mailing list.
- Festivals with changes
- Real estate listing changes
- Know when your favourite whiskey is on sale, or other special deals are announced before anyone else
- COVID related news from government websites
- University/organisation news from their website
- Detect and monitor changes in JSON API responses
- JSON API monitoring and alerting
- Changes in legal and other documents
- Trigger API calls via notifications when text appears on a website
- Glue together APIs using the JSON filter and JSON notifications
- Create RSS feeds based on changes in web content
- Monitor HTML source code for unexpected changes, strengthen your PCI compliance
- You have a very sensitive list of URLs to watch and you do _not_ want to use the paid alternatives. (Remember, _you_ are the product)
- Get notified when certain keywords appear in Twitter search results
- Proactively search for jobs, get notified when companies update their careers page, search job portals for keywords.
_Need an actual Chrome runner with JavaScript support? We support fetching via WebDriver and Playwright!_
**Get monitoring now! super simple, one command!**
#### Key Features
- Lots of trigger filters, such as "Trigger on text", "Remove text by selector", "Ignore text", "Extract text", also using regular-expressions!
- Target elements with XPath and CSS selectors; easily monitor complex JSON with JSONPath or jq
- Switch between fast non-JS and Chrome JS based "fetchers"
- Track changes in PDF files (Monitor text changed in the PDF, Also monitor PDF filesize and checksums)
- Easily specify how often a site should be checked
- Execute JS before extracting text (Good for logging in, see examples in the UI!)
- Override Request Headers, Specify `POST` or `GET` and other methods
- Use the "Visual Selector" to help target specific elements
- Configurable [proxy per watch](https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration)
- Send a screenshot with the notification when a change is detected in the web page
We [recommend and use Bright Data](https://brightdata.grsm.io/n0r16zf7eivq) global proxy services; Bright Data will match any first deposit up to $100 when you use our signup link.
Please :star: star :star: this project and help it grow! https://github.com/dgtlmoon/changedetection.io/
## Installation
### Docker
With Docker Compose, just clone this repository and..
```bash
$ docker-compose up -d
```
Now visit http://127.0.0.1:5000 and you should be able to access the UI.
Docker standalone
```bash
$ docker run -d --restart always -p "127.0.0.1:5000:5000" -v datastore-volume:/datastore --name changedetection.io dgtlmoon/changedetection.io
```
#### Updating to latest version
`:latest` tag is our latest stable release, `:dev` tag is our bleeding edge `master` branch.
Highly recommended :)
### Windows
See the install instructions at the wiki https://github.com/dgtlmoon/changedetection.io/wiki/Microsoft-Windows
### Python Pip
Check out our pypi page https://pypi.org/project/changedetection.io/
```bash
$ pip3 install changedetection.io
$ changedetection.io -d /path/to/empty/data/dir -p 5000
```
Then visit http://127.0.0.1:5000 and you should now be able to access the UI.
_Now with per-site configurable support for using a fast built-in HTTP fetcher or a Chrome-based fetcher for monitoring JavaScript websites!_
## Updating changedetection.io
### Docker
```
docker pull dgtlmoon/changedetection.io
docker kill $(docker ps -a|grep changedetection.io|awk '{print $1}')
docker rm $(docker ps -a|grep changedetection.io|awk '{print $1}')
docker kill $(docker ps -a -f name=changedetection.io -q)
docker rm $(docker ps -a -f name=changedetection.io -q)
docker run -d --restart always -p "127.0.0.1:5000:5000" -v datastore-volume:/datastore --name changedetection.io dgtlmoon/changedetection.io
```
### docker-compose
```bash
docker-compose pull && docker-compose up -d
```
### Screenshots
Examining differences in content.
![Self-hosted web page change monitoring context difference screenshot](screenshot-diff.png?raw=true "Self-hosted web page change monitoring context difference screenshot")
### Future plans
See the wiki for more information https://github.com/dgtlmoon/changedetection.io/wiki
- Greater configuration of check interval times, page request headers.
- ~~General options for timeout, default headers~~
- On change detection, callout to another API (handy for notices/issue trackers)
- ~~Explore the differences that were detected~~
- Add more options to explore versions of differences
- Use a graphic/rendered page difference instead of text (see the experimental `selenium-screenshot-diff` branch)
## Filters
XPath, JSONPath, jq, and CSS support comes baked in! You can be as specific as you need, use XPath exported from various XPath element query creation tools.
(We support LXML `re:test`, `re:match` and `re:replace`.)
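If you haven't used the EXSLT regular-expression functions before, here is a minimal standalone sketch of what an expression like `re:test` does, using `lxml` directly (this is only an illustration of the XPath syntax, not the application's own filter code):
```python
from lxml import html

# EXSLT regular-expression namespace used by lxml for re:test / re:match / re:replace
EXSLT_RE = {'re': 'http://exslt.org/regular-expressions'}

doc = html.fromstring("<div><p>Price: $3,949.99</p><p>Out of stock</p></div>")

# Keep only the <p> elements whose text matches a price-like pattern
hits = doc.xpath(r"//p[re:test(text(), '\$[0-9,.]+')]", namespaces=EXSLT_RE)
print([p.text for p in hits])  # ['Price: $3,949.99']
```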
## Notifications
ChangeDetection.io supports a massive number of notification services (including email, Office 365, custom APIs, etc) when a change is detected in a web page, thanks to the <a href="https://github.com/caronc/apprise">apprise</a> library.
Simply set one or more notification URLs in the _[edit]_ tab of that watch.
Just some examples
discord://webhook_id/webhook_token
flock://app_token/g:channel_id
gitter://token/room
gchat://workspace/key/token
msteams://TokenA/TokenB/TokenC/
o365://TenantID:AccountEmail/ClientID/ClientSecret/TargetEmail
rocket://user:password@hostname/#Channel
mailto://user:pass@example.com?to=receivingAddress@example.com
json://someserver.com/custom-api
syslog://
Please :star: star :star: this project and help it grow! https://github.com/dgtlmoon/changedetection.io/
<a href="https://github.com/caronc/apprise#popular-notification-services">And everything else in this list!</a>
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/screenshot-notifications.png" style="max-width:100%;" alt="Self-hosted web page change monitoring notifications" title="Self-hosted web page change monitoring notifications" />
Now you can also customise your notification content and use <a target="_new" href="https://jinja.palletsprojects.com/en/3.0.x/templates/">Jinja2 templating</a> for their title and body!
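For a feel of how these URLs behave, here is a small sketch using the <a href="https://github.com/caronc/apprise">apprise</a> library directly, with a Jinja2-templated body (the webhook IDs and the `watch_url` variable are placeholders for illustration only, not the application's own token names):
```python
import apprise
from jinja2 import Template

# Hypothetical Discord webhook credentials, purely for illustration
notifier = apprise.Apprise()
notifier.add('discord://webhook_id/webhook_token')

# Render a simple Jinja2 template for the notification body
body = Template("Change detected on {{ watch_url }}").render(watch_url="https://example.com")
notifier.notify(title="ChangeDetection.io", body=body)
```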
## JSON API Monitoring
Detect changes and monitor data in JSON API's by using either JSONPath or jq to filter, parse, and restructure JSON as needed.
![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/json-filter-field-example.png)
This will re-parse the JSON and apply formatting to the text, making it super easy to monitor and detect changes in JSON API results
![image](https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/json-diff-example.png)
### JSONPath or jq?
For more complex parsing, filtering, and modifying of JSON data, jq is recommended due to its built-in operators and functions. Refer to the [documentation](https://stedolan.github.io/jq/manual/) for more specific information on jq.
One big advantage of `jq` is that you can use logic in your JSON filter, such as filters that only keep items whose value is greater than or less than some threshold.
See the wiki https://github.com/dgtlmoon/changedetection.io/wiki/JSON-Selector-Filter-help for more information and examples
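As a rough illustration of the kind of logic jq allows, a program along the lines of `[.[] | select((.price|tonumber) > 1000)]` keeps only the items above a price threshold; the sketch below shows the equivalent logic in plain Python (the jq expression itself is only illustrative):
```python
# Items as they might appear after a JSON filter has been applied
offers = [
    {"name": "King Cobra Hero", "price": "3949.99"},
    {"name": "Budget tower", "price": "549.00"},
]

# Keep only the items whose price exceeds a threshold,
# the same idea a jq select() expression can express inside the filter itself
expensive = [o for o in offers if float(o["price"]) > 1000]
print(expensive)  # [{'name': 'King Cobra Hero', 'price': '3949.99'}]
```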
### Parse JSON embedded in HTML!
When you enable a `json:` or `jq:` filter, you can even automatically extract and parse embedded JSON inside an HTML page! Amazingly handy for sites that build content based on JSON, such as many e-commerce websites.
```
<html>
...
<script type="application/ld+json">
{
"@context":"http://schema.org/",
"@type":"Product",
"offers":{
"@type":"Offer",
"availability":"http://schema.org/InStock",
"price":"3949.99",
"priceCurrency":"USD",
"url":"https://www.newegg.com/p/3D5-000D-001T1"
},
"description":"Cobratype King Cobra Hero Desktop Gaming PC",
"name":"Cobratype King Cobra Hero Desktop Gaming PC",
"sku":"3D5-000D-001T1",
"itemCondition":"NewCondition"
}
</script>
```
`json:$..price` or `jq:..price` would give `3949.99`, or you can extract the whole structure (use a JSONPath test website to validate it).
The application will also notify you when it detects embedded JSON like this that it can follow automatically.
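To make the idea concrete, here is a small standalone sketch of what a `$..price`-style lookup does against the snippet above, using only the Python standard library (it is not the application's internal implementation):
```python
import json
import re

html_doc = """<html><script type="application/ld+json">
{"@type": "Product", "offers": {"@type": "Offer", "price": "3949.99", "priceCurrency": "USD"}}
</script></html>"""

# Pull out the embedded ld+json block, then walk it the way `$..price` would:
# recursively collect every value stored under a "price" key.
raw = re.search(r'<script type="application/ld\+json">(.*?)</script>', html_doc, re.S).group(1)
data = json.loads(raw)

def find_key(obj, wanted):
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == wanted:
                yield value
            yield from find_key(value, wanted)
    elif isinstance(obj, list):
        for item in obj:
            yield from find_key(item, wanted)

print(list(find_key(data, "price")))  # ['3949.99']
```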
## Proxy Configuration
See the wiki https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration , we also support using [BrightData proxy services where possible]( https://github.com/dgtlmoon/changedetection.io/wiki/Proxy-configuration#brightdata-proxy-support)
## Raspberry Pi support?
Raspberry Pi and linux/arm/v6, linux/arm/v7 and arm64 devices are supported! See the wiki for [details](https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver)
## API Support
Supports managing the website watch list [via our API](https://changedetection.io/docs/api_v1/index.html)
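As a quick sketch of what that looks like from Python (the `/api/v1/watch` path, `x-api-key` header and payload fields below are recalled from the linked API docs and should be double-checked there before use):
```python
import requests

BASE = "http://localhost:5000/api/v1"
HEADERS = {"x-api-key": "YOUR-API-KEY"}  # assumption: the key is shown in the application's settings UI

# List existing watches
print(requests.get(f"{BASE}/watch", headers=HEADERS).json())

# Create a new watch for a URL with a tag
requests.post(f"{BASE}/watch", headers=HEADERS,
              json={"url": "https://example.com", "tag": "examples"})
```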
## Support us
Do you use changedetection.io to make money? Does it save you time or money? Does it make your life easier or less stressful? Remember, we write this software when we should be doing actual paid work; we have to buy food and pay rent just like you.
Firstly, consider taking out a [change detection monthly subscription - unlimited checks and watches](https://lemonade.changedetection.io/start); even if you don't use it, you still get the warm fuzzy feeling of helping out the project. (And who knows, you might just use it!)
Or donate directly via PayPal [![Donate](https://img.shields.io/badge/Donate-PayPal-green.svg)](https://www.paypal.com/donate/?hosted_button_id=7CP6HR9ZCNDYJ)
Or BTC `1PLFN327GyUarpJd7nVe7Reqg9qHx5frNn`
<img src="https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/docs/btc-support.png" style="max-width:50%;" alt="Support us!" />
## Commercial Support
I offer commercial support. This software is depended on by network security, aerospace, data-science and data-journalism professionals, to name just a few. Please reach out at dgtlmoon@gmail.com for any enquiries; I am more than glad to work with your organisation to further the possibilities of what can be done with changedetection.io
[release-shield]: https://img.shields.io:/github/v/release/dgtlmoon/changedetection.io?style=for-the-badge
[docker-pulls]: https://img.shields.io/docker/pulls/dgtlmoon/changedetection.io?style=for-the-badge
[test-shield]: https://github.com/dgtlmoon/changedetection.io/actions/workflows/test-only.yml/badge.svg?branch=master
[license-shield]: https://img.shields.io/github/license/dgtlmoon/changedetection.io.svg?style=for-the-badge
[release-link]: https://github.com/dgtlmoon/changedetection.io/releases
[docker-link]: https://hub.docker.com/r/dgtlmoon/changedetection.io

21
app.json Normal file
View File

@@ -0,0 +1,21 @@
{
"name": "ChangeDetection.io",
"description": "The best and simplest self-hosted open source website change detection monitoring and notification service.",
"keywords": [
"changedetection",
"website monitoring"
],
"repository": "https://github.com/dgtlmoon/changedetection.io",
"success_url": "/",
"scripts": {
},
"env": {
},
"formation": {
"web": {
"quantity": 1,
"size": "free"
}
},
"image": "heroku/python"
}

View File

@@ -1 +0,0 @@
Note: run `pytest` from this directory.

View File

@@ -1,608 +0,0 @@
#!/usr/bin/python3
# @todo logging
# @todo extra options for url like , verify=False etc.
# @todo enable https://urllib3.readthedocs.io/en/latest/user-guide.html#ssl as option?
# @todo option for interval day/6 hour/etc
# @todo on change detected, config for calling some API
# @todo make tables responsive!
# @todo fetch title into json
# https://distill.io/features
# proxy per check
# - flask_cors, itsdangerous,MarkupSafe
import time
import os
import timeago
import threading
from threading import Event
import queue
from flask import Flask, render_template, request, send_file, send_from_directory, abort, redirect, url_for
from feedgen.feed import FeedGenerator
from flask import make_response
import datetime
import pytz
datastore = None
# Local
running_update_threads = []
ticker_thread = None
messages = []
extra_stylesheets = []
update_q = queue.Queue()
app = Flask(__name__, static_url_path="/var/www/change-detection/backen/static")
# Stop browser caching of assets
app.config['SEND_FILE_MAX_AGE_DEFAULT'] = 0
app.config.exit = Event()
app.config['NEW_VERSION_AVAILABLE'] = False
# Disables caching of the templates
app.config['TEMPLATES_AUTO_RELOAD'] = True
# We use the whole watch object from the store/JSON so we can see if there's some related status in terms of a thread
# running or something similar.
@app.template_filter('format_last_checked_time')
def _jinja2_filter_datetime(watch_obj, format="%Y-%m-%d %H:%M:%S"):
# Worker thread tells us which UUID it is currently processing.
for t in running_update_threads:
if t.current_uuid == watch_obj['uuid']:
return "Checking now.."
if watch_obj['last_checked'] == 0:
return 'Not yet'
return timeago.format(int(watch_obj['last_checked']), time.time())
# @app.context_processor
# def timeago():
# def _timeago(lower_time, now):
# return timeago.format(lower_time, now)
# return dict(timeago=_timeago)
@app.template_filter('format_timestamp_timeago')
def _jinja2_filter_datetimestamp(timestamp, format="%Y-%m-%d %H:%M:%S"):
return timeago.format(timestamp, time.time())
# return timeago.format(timestamp, time.time())
# return datetime.datetime.utcfromtimestamp(timestamp).strftime(format)
def changedetection_app(config=None, datastore_o=None):
global datastore
datastore = datastore_o
app.config.update(dict(DEBUG=True))
app.config.update(config or {})
# Setup cors headers to allow all domains
# https://flask-cors.readthedocs.io/en/latest/
# CORS(app)
# https://github.com/pallets/flask/blob/93dd1709d05a1cf0e886df6223377bdab3b077fb/examples/tutorial/flaskr/__init__.py#L39
# You can divide up the stuff like this
@app.route("/", methods=['GET'])
def index():
global messages
limit_tag = request.args.get('tag')
# Sort by last_changed and add the uuid which is usually the key..
sorted_watches = []
for uuid, watch in datastore.data['watching'].items():
if limit_tag != None:
# Support for comma separated list of tags.
for tag_in_watch in watch['tag'].split(','):
tag_in_watch = tag_in_watch.strip()
if tag_in_watch == limit_tag:
watch['uuid'] = uuid
sorted_watches.append(watch)
else:
watch['uuid'] = uuid
sorted_watches.append(watch)
sorted_watches.sort(key=lambda x: x['last_changed'], reverse=True)
existing_tags = datastore.get_all_tags()
rss = request.args.get('rss')
if rss:
fg = FeedGenerator()
fg.title('changedetection.io')
fg.description('Feed description')
fg.link(href='https://changedetection.io')
for watch in sorted_watches:
if not watch['viewed']:
fe = fg.add_entry()
fe.title(watch['url'])
fe.link(href=watch['url'])
fe.description(watch['url'])
fe.guid(watch['uuid'], permalink=False)
dt = datetime.datetime.fromtimestamp(int(watch['newest_history_key']))
dt = dt.replace(tzinfo=pytz.UTC)
fe.pubDate(dt)
response = make_response(fg.rss_str())
response.headers.set('Content-Type', 'application/rss+xml')
return response
else:
output = render_template("watch-overview.html",
watches=sorted_watches,
messages=messages,
tags=existing_tags,
active_tag=limit_tag,
has_unviewed=datastore.data['has_unviewed'])
# Show messages but once.
messages = []
return output
@app.route("/scrub", methods=['GET', 'POST'])
def scrub_page():
from pathlib import Path
global messages
if request.method == 'POST':
confirmtext = request.form.get('confirmtext')
if confirmtext == 'scrub':
for txt_file_path in Path(app.config['datastore_path']).rglob('*.txt'):
os.unlink(txt_file_path)
for uuid, watch in datastore.data['watching'].items():
watch['last_checked'] = 0
watch['last_changed'] = 0
watch['previous_md5'] = None
watch['history'] = {}
datastore.needs_write = True
messages.append({'class': 'ok', 'message': 'Cleaned all version history.'})
else:
messages.append({'class': 'error', 'message': 'Wrong confirm text.'})
return redirect(url_for('index'))
return render_template("scrub.html")
# If they edited an existing watch, we need to know to reset the current/previous md5 to include
# the excluded text.
def get_current_checksum_include_ignore_text(uuid):
import hashlib
from backend import fetch_site_status
# Get the most recent one
newest_history_key = datastore.get_val(uuid, 'newest_history_key')
# 0 means that theres only one, so that there should be no 'unviewed' history availabe
if newest_history_key == 0:
newest_history_key = list(datastore.data['watching'][uuid]['history'].keys())[0]
if newest_history_key:
with open(datastore.data['watching'][uuid]['history'][newest_history_key],
encoding='utf-8') as file:
raw_content = file.read()
handler = fetch_site_status.perform_site_check(datastore=datastore)
stripped_content = handler.strip_ignore_text(raw_content,
datastore.data['watching'][uuid]['ignore_text'])
checksum = hashlib.md5(stripped_content).hexdigest()
return checksum
return datastore.data['watching'][uuid]['previous_md5']
@app.route("/edit/<string:uuid>", methods=['GET', 'POST'])
def edit_page(uuid):
global messages
import validators
# More for testing, possible to return the first/only
if uuid == 'first':
uuid = list(datastore.data['watching'].keys()).pop()
if request.method == 'POST':
url = request.form.get('url').strip()
tag = request.form.get('tag').strip()
# Extra headers
form_headers = request.form.get('headers').strip().split("\n")
extra_headers = {}
if form_headers:
for header in form_headers:
if len(header):
parts = header.split(':', 1)
if len(parts) == 2:
extra_headers.update({parts[0].strip(): parts[1].strip()})
update_obj = {'url': url,
'tag': tag,
'headers': extra_headers
}
# Ignore text
form_ignore_text = request.form.get('ignore-text').strip()
ignore_text = []
if len(form_ignore_text):
for text in form_ignore_text.split("\n"):
text = text.strip()
if len(text):
ignore_text.append(text)
datastore.data['watching'][uuid]['ignore_text'] = ignore_text
# Reset the previous_md5 so we process a new snapshot including stripping ignore text.
if len(datastore.data['watching'][uuid]['history']):
update_obj['previous_md5'] = get_current_checksum_include_ignore_text(uuid=uuid)
validators.url(url) # @todo switch to prop/attr/observer
datastore.data['watching'][uuid].update(update_obj)
datastore.needs_write = True
messages.append({'class': 'ok', 'message': 'Updated watch.'})
return redirect(url_for('index'))
else:
output = render_template("edit.html", uuid=uuid, watch=datastore.data['watching'][uuid], messages=messages)
return output
@app.route("/settings", methods=['GET', "POST"])
def settings_page():
global messages
if request.method == 'POST':
try:
minutes = int(request.values.get('minutes').strip())
except ValueError:
messages.append({'class': 'error', 'message': "Invalid value given, use an integer."})
else:
if minutes >= 5:
datastore.data['settings']['requests']['minutes_between_check'] = minutes
datastore.needs_write = True
messages.append({'class': 'ok', 'message': "Updated"})
else:
messages.append(
{'class': 'error', 'message': "Must be atleast 5 minutes."})
output = render_template("settings.html", messages=messages,
minutes=datastore.data['settings']['requests']['minutes_between_check'])
messages = []
return output
@app.route("/import", methods=['GET', "POST"])
def import_page():
import validators
global messages
remaining_urls = []
good = 0
if request.method == 'POST':
urls = request.values.get('urls').split("\n")
for url in urls:
url = url.strip()
if len(url) and validators.url(url):
new_uuid = datastore.add_watch(url=url.strip(), tag="")
# Straight into the queue.
update_q.put(new_uuid)
good += 1
else:
if len(url):
remaining_urls.append(url)
messages.append({'class': 'ok', 'message': "{} Imported, {} Skipped.".format(good, len(remaining_urls))})
if len(remaining_urls) == 0:
# Looking good, redirect to index.
return redirect(url_for('index'))
# Could be some remaining, or we could be on GET
output = render_template("import.html",
messages=messages,
remaining="\n".join(remaining_urls)
)
messages = []
return output
# Clear all statuses, so we do not see the 'unviewed' class
@app.route("/api/mark-all-viewed", methods=['GET'])
def mark_all_viewed():
# Save the current newest history as the most recently viewed
for watch_uuid, watch in datastore.data['watching'].items():
datastore.set_last_viewed(watch_uuid, watch['newest_history_key'])
messages.append({'class': 'ok', 'message': "Cleared all statuses."})
return redirect(url_for('index'))
@app.route("/diff/<string:uuid>", methods=['GET'])
def diff_history_page(uuid):
global messages
# More for testing, possible to return the first/only
if uuid == 'first':
uuid = list(datastore.data['watching'].keys()).pop()
extra_stylesheets = ['/static/css/diff.css']
try:
watch = datastore.data['watching'][uuid]
except KeyError:
messages.append({'class': 'error', 'message': "No history found for the specified link, bad link?"})
return redirect(url_for('index'))
dates = list(watch['history'].keys())
# Convert to int, sort and back to str again
dates = [int(i) for i in dates]
dates.sort(reverse=True)
dates = [str(i) for i in dates]
if len(dates) < 2:
messages.append(
{'class': 'error', 'message': "Not enough saved change detection snapshots to produce a report."})
return redirect(url_for('index'))
# Save the current newest history as the most recently viewed
datastore.set_last_viewed(uuid, dates[0])
newest_file = watch['history'][dates[0]]
with open(newest_file, 'r') as f:
newest_version_file_contents = f.read()
previous_version = request.args.get('previous_version')
try:
previous_file = watch['history'][previous_version]
except KeyError:
# Not present, use a default value, the second one in the sorted list.
previous_file = watch['history'][dates[1]]
with open(previous_file, 'r') as f:
previous_version_file_contents = f.read()
output = render_template("diff.html", watch_a=watch,
messages=messages,
newest=newest_version_file_contents,
previous=previous_version_file_contents,
extra_stylesheets=extra_stylesheets,
versions=dates[1:],
newest_version_timestamp=dates[0],
current_previous_version=str(previous_version),
current_diff_url=watch['url'])
return output
@app.route("/favicon.ico", methods=['GET'])
def favicon():
return send_from_directory("/app/static/images", filename="favicon.ico")
# We're good but backups are even better!
@app.route("/backup", methods=['GET'])
def get_backup():
import zipfile
from pathlib import Path
# create a ZipFile object
backupname = "changedetection-backup-{}.zip".format(int(time.time()))
# We only care about UUIDS from the current index file
uuids = list(datastore.data['watching'].keys())
with zipfile.ZipFile(os.path.join(app.config['datastore_path'], backupname), 'w',
compression=zipfile.ZIP_DEFLATED,
compresslevel=6) as zipObj:
# Be sure we're written fresh
datastore.sync_to_json()
# Add the index
zipObj.write(os.path.join(app.config['datastore_path'], "url-watches.json"))
# Add any snapshot data we find
for txt_file_path in Path(app.config['datastore_path']).rglob('*.txt'):
parent_p = txt_file_path.parent
if parent_p.name in uuids:
zipObj.write(txt_file_path)
return send_file(os.path.join(app.config['datastore_path'], backupname),
as_attachment=True,
mimetype="application/zip",
attachment_filename=backupname)
@app.route("/static/<string:group>/<string:filename>", methods=['GET'])
def static_content(group, filename):
# These files should be in our subdirectory
full_path = os.path.realpath(__file__)
p = os.path.dirname(full_path)
try:
return send_from_directory("{}/static/{}".format(p, group), filename=filename)
except FileNotFoundError:
abort(404)
@app.route("/api/add", methods=['POST'])
def api_watch_add():
global messages
# @todo add_watch should throw a custom Exception for validation etc
new_uuid = datastore.add_watch(url=request.form.get('url').strip(), tag=request.form.get('tag').strip())
# Straight into the queue.
update_q.put(new_uuid)
messages.append({'class': 'ok', 'message': 'Watch added.'})
return redirect(url_for('index'))
@app.route("/api/delete", methods=['GET'])
def api_delete():
global messages
uuid = request.args.get('uuid')
datastore.delete(uuid)
messages.append({'class': 'ok', 'message': 'Deleted.'})
return redirect(url_for('index'))
@app.route("/api/checknow", methods=['GET'])
def api_watch_checknow():
global messages
tag = request.args.get('tag')
uuid = request.args.get('uuid')
i = 0
running_uuids = []
for t in running_update_threads:
running_uuids.append(t.current_uuid)
# @todo check thread is running and skip
if uuid:
if uuid not in running_uuids:
update_q.put(uuid)
i = 1
elif tag != None:
# Items that have this current tag
for watch_uuid, watch in datastore.data['watching'].items():
if (tag != None and tag in watch['tag']):
i += 1
if watch_uuid not in running_uuids:
update_q.put(watch_uuid)
else:
# No tag, no uuid, add everything.
for watch_uuid, watch in datastore.data['watching'].items():
i += 1
if watch_uuid not in running_uuids:
update_q.put(watch_uuid)
messages.append({'class': 'ok', 'message': "{} watches are rechecking.".format(i)})
return redirect(url_for('index', tag=tag))
# @todo handle ctrl break
ticker_thread = threading.Thread(target=ticker_thread_check_time_launch_checks).start()
# Check for new release version
threading.Thread(target=check_for_new_version).start()
return app
# Check for new version and anonymous stats
def check_for_new_version():
import requests
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
while not app.config.exit.is_set():
try:
r = requests.post("https://changedetection.io/check-ver.php",
data={'version': datastore.data['version_tag'],
'app_guid': datastore.data['app_guid']},
verify=False)
except:
pass
try:
if "new_version" in r.text:
app.config['NEW_VERSION_AVAILABLE'] = True
except:
pass
# Check daily
app.config.exit.wait(86400)
# Requests for checking on the site use a pool of thread Workers managed by a Queue.
class Worker(threading.Thread):
current_uuid = None
def __init__(self, q, *args, **kwargs):
self.q = q
super().__init__(*args, **kwargs)
def run(self):
from backend import fetch_site_status
update_handler = fetch_site_status.perform_site_check(datastore=datastore)
while not app.config.exit.is_set():
try:
uuid = self.q.get(block=False)
except queue.Empty:
pass
else:
self.current_uuid = uuid
if uuid in list(datastore.data['watching'].keys()):
try:
changed_detected, result, contents = update_handler.run(uuid)
except PermissionError as s:
app.logger.error("File permission error updating", uuid, str(s))
else:
if result:
datastore.update_watch(uuid=uuid, update_obj=result)
if changed_detected:
# A change was detected
datastore.save_history_text(uuid=uuid, contents=contents, result_obj=result)
self.current_uuid = None # Done
self.q.task_done()
app.config.exit.wait(1)
# Thread runner to check every minute, look for new watches to feed into the Queue.
def ticker_thread_check_time_launch_checks():
# Spin up Workers.
for _ in range(datastore.data['settings']['requests']['workers']):
new_worker = Worker(update_q)
running_update_threads.append(new_worker)
new_worker.start()
# Every minute check for new UUIDs to follow up on
minutes = datastore.data['settings']['requests']['minutes_between_check']
while not app.config.exit.is_set():
running_uuids = []
for t in running_update_threads:
running_uuids.append(t.current_uuid)
# Look at the dataset, find a stale watch to process
threshold = time.time() - (minutes * 60)
for uuid, watch in datastore.data['watching'].items():
if watch['last_checked'] <= threshold:
if not uuid in running_uuids and uuid not in update_q.queue:
update_q.put(uuid)
# Should be low so we can break this out in testing
app.config.exit.wait(1)

View File

@@ -1,14 +0,0 @@
FROM python:3.8-slim
# https://stackoverflow.com/questions/58701233/docker-logs-erroneously-appears-empty-until-container-stops
ENV PYTHONUNBUFFERED=1
WORKDIR /app
RUN [ ! -d "/datastore" ] && mkdir /datastore
COPY sleep.py /
CMD [ "python", "/sleep.py" ]

View File

@@ -1,7 +0,0 @@
import time
print ("Sleep loop, you should run your script from the console")
while True:
# Wait for 5 seconds
time.sleep(2)

View File

@@ -1,118 +0,0 @@
import time
import requests
import hashlib
from inscriptis import get_text
import urllib3
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
# Some common stuff here that can be moved to a base class
class perform_site_check():
def __init__(self, *args, datastore, **kwargs):
super().__init__(*args, **kwargs)
self.datastore = datastore
def strip_ignore_text(self, content, list_ignore_text):
ignore = []
for k in list_ignore_text:
ignore.append(k.encode('utf8'))
output = []
for line in content.splitlines():
line = line.encode('utf8')
# Always ignore blank lines in this mode. (when this function gets called)
if len(line.strip()):
if not any(skip_text in line for skip_text in ignore):
output.append(line)
return "\n".encode('utf8').join(output)
def run(self, uuid):
timestamp = int(time.time()) # used for storage etc too
stripped_text_from_html = False
changed_detected = False
update_obj = {'previous_md5': self.datastore.data['watching'][uuid]['previous_md5'],
'history': {},
"last_checked": timestamp
}
extra_headers = self.datastore.get_val(uuid, 'headers')
# Tweak the base config with the per-watch ones
request_headers = self.datastore.data['settings']['headers']
request_headers.update(extra_headers)
# https://github.com/psf/requests/issues/4525
# Requests doesnt yet support brotli encoding, so don't put 'br' here, be totally sure that the user cannot
# do this by accident.
if 'Accept-Encoding' in request_headers and "br" in request_headers['Accept-Encoding']:
request_headers['Accept-Encoding'] = request_headers['Accept-Encoding'].replace(', br', '')
try:
timeout = self.datastore.data['settings']['requests']['timeout']
except KeyError:
# @todo yeah this should go back to the default value in store.py, but this whole object should abstract off it
timeout = 15
try:
url = self.datastore.get_val(uuid, 'url')
r = requests.get(url,
headers=request_headers,
timeout=timeout,
verify=False)
stripped_text_from_html = get_text(r.text)
# Usually from networkIO/requests level
except (requests.exceptions.ConnectionError, requests.exceptions.ReadTimeout) as e:
update_obj["last_error"] = str(e)
print(str(e))
except requests.exceptions.MissingSchema:
print("Skipping {} due to missing schema/bad url".format(uuid))
# Usually from html2text level
except UnicodeDecodeError as e:
update_obj["last_error"] = str(e)
print(str(e))
# figure out how to deal with this cleaner..
# 'utf-8' codec can't decode byte 0xe9 in position 480: invalid continuation byte
else:
# We rely on the actual text in the html output.. many sites have random script vars etc,
# in the future we'll implement other mechanisms.
update_obj["last_check_status"] = r.status_code
update_obj["last_error"] = False
if not len(r.text):
update_obj["last_error"] = "Empty reply"
# If there's text to skip
# @todo we could abstract out the get_text() to handle this cleaner
if len(self.datastore.data['watching'][uuid]['ignore_text']):
content = self.strip_ignore_text(stripped_text_from_html,
self.datastore.data['watching'][uuid]['ignore_text'])
else:
content = stripped_text_from_html.encode('utf8')
fetched_md5 = hashlib.md5(content).hexdigest()
# could be None or False depending on JSON type
if self.datastore.data['watching'][uuid]['previous_md5'] != fetched_md5:
changed_detected = True
# Don't confuse people by updating as last-changed, when it actually just changed from None..
if self.datastore.get_val(uuid, 'previous_md5'):
update_obj["last_changed"] = timestamp
update_obj["previous_md5"] = fetched_md5
return changed_detected, update_obj, stripped_text_from_html

View File

@@ -1,66 +0,0 @@
table {
table-layout: fixed;
width: 100%;
}
td {
width: 33%;
padding: 3px 4px;
border: 1px solid transparent;
vertical-align: top;
font: 1em monospace;
text-align: left;
white-space: pre-wrap;
}
h1 {
display: inline;
font-size: 100%;
}
del {
text-decoration: none;
color: #b30000;
background: #fadad7;
}
ins {
background: #eaf2c2;
color: #406619;
text-decoration: none;
}
#result {
white-space: pre-wrap;
}
#settings {
background: rgba(0,0,0,.05);
padding: 1em;
border-radius: 10px;
margin-bottom: 1em;
color: #fff;
font-size: 80%;
}
#settings label {
margin-left: 1em;
display: inline-block;
font-weight: normal;
}
.source {
position: absolute;
right: 1%;
top: .2em;
}
@-moz-document url-prefix() {
body {
height: 99%; /* Hide scroll bar in Firefox */
}
}
#diff-ui {
background: #fff;
padding: 2em;
margin: 1em;
border-radius: 5px;
font-size: 9px;
}

View File

@@ -1,268 +0,0 @@
/*
* -- BASE STYLES --
* Most of these are inherited from Base, but I want to change a few.
*/
body {
color: #333;
background: #262626;
}
.pure-table-even {
background: #fff;
}
/* Some styles from https://css-tricks.com/ */
a {
text-decoration: none;
color: #1b98f8;
}
a.github-link {
color: #fff;
}
.pure-menu-horizontal {
background: #fff;
padding: 5px;
display: flex;
justify-content: space-between;
border-bottom: 2px solid #ed5900;
align-items: center;
}
section.content {
padding-top: 5em;
padding-bottom: 5em;
flex-direction: column;
display: flex;
align-items: center;
justify-content: center;
}
.pure-table.watch-table td {
font-size: 80%;
}
/* table related */
.watch-table {
width: 100%;
}
.watch-table tr.unviewed {
font-weight: bold;
}
.watch-tag-list {
color: #e70069;
white-space: nowrap;
}
.box {
max-width: 80%;
flex-direction: column;
display: flex;
justify-content: center;
}
.watch-table .error {
color: #a00;
}
.watch-table td {
white-space: nowrap;
}
.watch-table td.title-col {
word-break: break-all;
white-space: normal;
}
.watch-table th {
white-space: nowrap;
}
.watch-table .title-col a[target="_blank"]::after, .current-diff-url::after {
content: url(data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAAoAAAAKCAYAAACNMs+9AAAAQElEQVR42qXKwQkAIAxDUUdxtO6/RBQkQZvSi8I/pL4BoGw/XPkh4XigPmsUgh0626AjRsgxHTkUThsG2T/sIlzdTsp52kSS1wAAAABJRU5ErkJggg==);
margin: 0 3px 0 5px;
}
#post-list-buttons {
text-align: right;
padding: 0px;
margin: 0px;
}
#post-list-buttons li {
display: inline-block;
}
#post-list-buttons a {
border-top-left-radius: initial;
border-top-right-radius: initial;
border-bottom-left-radius: 5px;
border-bottom-right-radius: 5px;
}
body:after {
content: "";
background: linear-gradient(130deg, #ff7a18, #af002d 41.07%, #319197 76.05%)
}
body:after, body:before {
display: block;
height: 600px;
position: absolute;
top: 0;
left: 0;
width: 100%;
z-index: -1;
}
body::after {
opacity: 0.91;
}
body::before {
content: "";
background-image: url(/static/images/gradient-border.png);
}
body:before {
background-size: cover
}
body:after, body:before {
-webkit-clip-path: polygon(100% 0, 0 0, 0 77.5%, 1% 77.4%, 2% 77.1%, 3% 76.6%, 4% 75.9%, 5% 75.05%, 6% 74.05%, 7% 72.95%, 8% 71.75%, 9% 70.55%, 10% 69.3%, 11% 68.05%, 12% 66.9%, 13% 65.8%, 14% 64.8%, 15% 64%, 16% 63.35%, 17% 62.85%, 18% 62.6%, 19% 62.5%, 20% 62.65%, 21% 63%, 22% 63.5%, 23% 64.2%, 24% 65.1%, 25% 66.1%, 26% 67.2%, 27% 68.4%, 28% 69.65%, 29% 70.9%, 30% 72.15%, 31% 73.3%, 32% 74.35%, 33% 75.3%, 34% 76.1%, 35% 76.75%, 36% 77.2%, 37% 77.45%, 38% 77.5%, 39% 77.3%, 40% 76.95%, 41% 76.4%, 42% 75.65%, 43% 74.75%, 44% 73.75%, 45% 72.6%, 46% 71.4%, 47% 70.15%, 48% 68.9%, 49% 67.7%, 50% 66.55%, 51% 65.5%, 52% 64.55%, 53% 63.75%, 54% 63.15%, 55% 62.75%, 56% 62.55%, 57% 62.5%, 58% 62.7%, 59% 63.1%, 60% 63.7%, 61% 64.45%, 62% 65.4%, 63% 66.45%, 64% 67.6%, 65% 68.8%, 66% 70.05%, 67% 71.3%, 68% 72.5%, 69% 73.6%, 70% 74.65%, 71% 75.55%, 72% 76.35%, 73% 76.9%, 74% 77.3%, 75% 77.5%, 76% 77.45%, 77% 77.25%, 78% 76.8%, 79% 76.2%, 80% 75.4%, 81% 74.45%, 82% 73.4%, 83% 72.25%, 84% 71.05%, 85% 69.8%, 86% 68.55%, 87% 67.35%, 88% 66.2%, 89% 65.2%, 90% 64.3%, 91% 63.55%, 92% 63%, 93% 62.65%, 94% 62.5%, 95% 62.55%, 96% 62.8%, 97% 63.3%, 98% 63.9%, 99% 64.75%, 100% 65.7%);
clip-path: polygon(100% 0, 0 0, 0 77.5%, 1% 77.4%, 2% 77.1%, 3% 76.6%, 4% 75.9%, 5% 75.05%, 6% 74.05%, 7% 72.95%, 8% 71.75%, 9% 70.55%, 10% 69.3%, 11% 68.05%, 12% 66.9%, 13% 65.8%, 14% 64.8%, 15% 64%, 16% 63.35%, 17% 62.85%, 18% 62.6%, 19% 62.5%, 20% 62.65%, 21% 63%, 22% 63.5%, 23% 64.2%, 24% 65.1%, 25% 66.1%, 26% 67.2%, 27% 68.4%, 28% 69.65%, 29% 70.9%, 30% 72.15%, 31% 73.3%, 32% 74.35%, 33% 75.3%, 34% 76.1%, 35% 76.75%, 36% 77.2%, 37% 77.45%, 38% 77.5%, 39% 77.3%, 40% 76.95%, 41% 76.4%, 42% 75.65%, 43% 74.75%, 44% 73.75%, 45% 72.6%, 46% 71.4%, 47% 70.15%, 48% 68.9%, 49% 67.7%, 50% 66.55%, 51% 65.5%, 52% 64.55%, 53% 63.75%, 54% 63.15%, 55% 62.75%, 56% 62.55%, 57% 62.5%, 58% 62.7%, 59% 63.1%, 60% 63.7%, 61% 64.45%, 62% 65.4%, 63% 66.45%, 64% 67.6%, 65% 68.8%, 66% 70.05%, 67% 71.3%, 68% 72.5%, 69% 73.6%, 70% 74.65%, 71% 75.55%, 72% 76.35%, 73% 76.9%, 74% 77.3%, 75% 77.5%, 76% 77.45%, 77% 77.25%, 78% 76.8%, 79% 76.2%, 80% 75.4%, 81% 74.45%, 82% 73.4%, 83% 72.25%, 84% 71.05%, 85% 69.8%, 86% 68.55%, 87% 67.35%, 88% 66.2%, 89% 65.2%, 90% 64.3%, 91% 63.55%, 92% 63%, 93% 62.65%, 94% 62.5%, 95% 62.55%, 96% 62.8%, 97% 63.3%, 98% 63.9%, 99% 64.75%, 100% 65.7%)
}
.button-small {
font-size: 85%;
}
.fetch-error {
padding-top: 1em;
font-size: 60%;
max-width: 400px;
display: block;
}
.edit-form {
background: #fff;
padding: 2em;
margin: 1em;
border-radius: 5px;
}
.button-secondary {
color: white;
border-radius: 4px;
text-shadow: 0 1px 1px rgba(0, 0, 0, 0.2);
}
.button-success {
background: rgb(28, 184, 65);
/* this is a green */
}
.button-tag {
background: rgb(99, 99, 99);
color: #fff;
font-size: 65%;
border-bottom-left-radius: initial;
border-bottom-right-radius: initial;
}
.button-tag.active {
background: #9c9c9c;
font-weight: bold;
}
.button-error {
background: rgb(202, 60, 60);
/* this is a maroon */
}
.button-warning {
background: rgb(223, 117, 20);
/* this is an orange */
}
.button-secondary {
background: rgb(66, 184, 221);
/* this is a light blue */
}
.button-cancel {
background: rgb(200, 200, 200);
/* this is a green */
}
.messages {
padding: 1em;
background: rgba(255,255,255,.2);
border-radius: 10px;
color: #fff;
font-weight: bold;
}
.pure-form label {
font-weight: bold;
}
#new-watch-form {
background: rgba(0,0,0,.05);
padding: 1em;
border-radius: 10px;
margin-bottom: 1em;
}
#new-watch-form legend {
color: #fff;
}
#diff-col {
padding-left:40px;
}
#diff-jump {
position: fixed;
left: 0px;
top: 80px;
background: #fff;
padding: 10px;
border-top-right-radius: 5px;
border-bottom-right-radius: 5px;
box-shadow: 5px 0 5px -2px #888;
}
#diff-jump a {
color: #1b98f8;
cursor: grabbing;
-moz-user-select: none;
-webkit-user-select: none;
-ms-user-select:none;
user-select:none;
-o-user-select:none;
}
footer {
padding: 10px;
background: #fff;
color: #444;
text-align: center;
}
#feed-icon {
vertical-align: middle;
}
#version {
position: absolute;
top: 80px;
right: 0px;
font-size: 8px;
background: #fff;
padding: 10px;
}
#new-version-text a{
color: #e07171;
}

Binary file not shown (removed image, 4.2 KiB)

Binary file not shown (removed image, 43 KiB)

File diff suppressed because it is too large

View File

@@ -1,260 +0,0 @@
import json
import uuid as uuid_builder
import os.path
from os import path
from threading import Lock
from copy import deepcopy
import logging
import time
import threading
# Is there an existing library to ensure some data store (JSON etc) is in sync with CRUD methods?
# Open a github issue if you know something :)
# https://stackoverflow.com/questions/6190468/how-to-trigger-function-on-value-change
class ChangeDetectionStore:
lock = Lock()
def __init__(self, datastore_path="/datastore", include_default_watches=True):
self.needs_write = False
self.datastore_path = datastore_path
self.json_store_path = "{}/url-watches.json".format(self.datastore_path)
self.stop_thread = False
self.__data = {
'note': "Hello! If you change this file manually, please be sure to restart your changedetection.io instance!",
'watching': {},
'settings': {
'headers': {
'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36',
'Accept': 'text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8,application/signed-exchange;v=b3;q=0.9',
'Accept-Encoding': 'gzip, deflate', # No support for brolti in python requests yet.
'Accept-Language': 'en-GB,en-US;q=0.9,en;'
},
'requests': {
'timeout': 15, # Default 15 seconds
'minutes_between_check': 3 * 60, # Default 3 hours
'workers': 10 # Number of threads, lower is better for slow connections
}
}
}
# Base definition for all watchers
self.generic_definition = {
'url': None,
'tag': None,
'last_checked': 0,
'last_changed': 0,
'last_viewed': 0, # history key value of the last viewed via the [diff] link
'newest_history_key': "",
'title': None,
'previous_md5': "",
'uuid': str(uuid_builder.uuid4()),
'headers': {}, # Extra headers to send
'history': {}, # Dict of timestamp and output stripped filename
'ignore_text': [] # List of text to ignore when calculating the comparison checksum
}
if path.isfile('/source.txt'):
with open('/source.txt') as f:
# Should be set in Dockerfile to look for /source.txt , this will give us the git commit #
# So when someone gives us a backup file to examine, we know exactly what code they were running.
self.__data['build_sha'] = f.read()
try:
# @todo retest with ", encoding='utf-8'"
with open(self.json_store_path) as json_file:
from_disk = json.load(json_file)
# @todo isnt there a way todo this dict.update recursively?
# Problem here is if the one on the disk is missing a sub-struct, it wont be present anymore.
if 'watching' in from_disk:
self.__data['watching'].update(from_disk['watching'])
if 'settings' in from_disk:
if 'headers' in from_disk['settings']:
self.__data['settings']['headers'].update(from_disk['settings']['headers'])
if 'requests' in from_disk['settings']:
self.__data['settings']['requests'].update(from_disk['settings']['requests'])
# Reinitialise each `watching` with our generic_definition in the case that we add a new var in the future.
# @todo pretty sure theres a python we todo this with an abstracted(?) object!
for uuid, watch in self.__data['watching'].items():
_blank = deepcopy(self.generic_definition)
_blank.update(watch)
self.__data['watching'].update({uuid: _blank})
self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
print("Watching:", uuid, self.__data['watching'][uuid]['url'])
# First time ran, doesnt exist.
except (FileNotFoundError, json.decoder.JSONDecodeError):
if include_default_watches:
print("Creating JSON store at", self.datastore_path)
self.add_watch(url='http://www.quotationspage.com/random.php', tag='test')
self.add_watch(url='https://news.ycombinator.com/', tag='Tech news')
self.add_watch(url='https://www.gov.uk/coronavirus', tag='Covid')
self.add_watch(url='https://changedetection.io', tag='Tech news')
self.__data['version_tag'] = "0.27"
if not 'app_guid' in self.__data:
self.__data['app_guid'] = str(uuid_builder.uuid4())
self.needs_write = True
# Finally start the thread that will manage periodic data saves to JSON
save_data_thread = threading.Thread(target=self.save_datastore).start()
# Returns the newest key, but if theres only 1 record, then it's counted as not being new, so return 0.
def get_newest_history_key(self, uuid):
if len(self.__data['watching'][uuid]['history']) == 1:
return 0
dates = list(self.__data['watching'][uuid]['history'].keys())
# Convert to int, sort and back to str again
dates = [int(i) for i in dates]
dates.sort(reverse=True)
if len(dates):
# always keyed as str
return str(dates[0])
return 0
def set_last_viewed(self, uuid, timestamp):
self.data['watching'][uuid].update({'last_viewed': int(timestamp)})
self.needs_write = True
def update_watch(self, uuid, update_obj):
with self.lock:
# In python 3.9 we have the |= dict operator, but that still will lose data on nested structures...
for dict_key, d in self.generic_definition.items():
if isinstance(d, dict):
if update_obj is not None and dict_key in update_obj:
self.__data['watching'][uuid][dict_key].update(update_obj[dict_key])
del (update_obj[dict_key])
self.__data['watching'][uuid].update(update_obj)
self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
self.needs_write = True
@property
def data(self):
has_unviewed = False
for uuid, v in self.__data['watching'].items():
self.__data['watching'][uuid]['newest_history_key'] = self.get_newest_history_key(uuid)
if int(v['newest_history_key']) <= int(v['last_viewed']):
self.__data['watching'][uuid]['viewed'] = True
else:
self.__data['watching'][uuid]['viewed'] = False
has_unviewed = True
self.__data['has_unviewed'] = has_unviewed
return self.__data
def get_all_tags(self):
tags = []
for uuid, watch in self.data['watching'].items():
# Support for comma separated list of tags.
for tag in watch['tag'].split(','):
tag = tag.strip()
if tag not in tags:
tags.append(tag)
tags.sort()
return tags
def delete(self, uuid):
with self.lock:
if uuid == 'all':
self.__data['watching'] = {}
else:
del (self.__data['watching'][uuid])
self.needs_write = True
def url_exists(self, url):
# Probably their should be dict...
for watch in self.data['watching']:
if watch['url'] == url:
return True
return False
def get_val(self, uuid, val):
# Probably their should be dict...
return self.data['watching'][uuid].get(val)
def add_watch(self, url, tag):
with self.lock:
# @todo use a common generic version of this
new_uuid = str(uuid_builder.uuid4())
_blank = deepcopy(self.generic_definition)
_blank.update({
'url': url,
'tag': tag,
'uuid': new_uuid
})
self.data['watching'][new_uuid] = _blank
# Get the directory ready
output_path = "{}/{}".format(self.datastore_path, new_uuid)
try:
os.mkdir(output_path)
except FileExistsError:
print(output_path, "already exists.")
self.sync_to_json()
return new_uuid
# Save some text file to the appropriate path and bump the history
# result_obj from fetch_site_status.run()
def save_history_text(self, uuid, result_obj, contents):
output_path = "{}/{}".format(self.datastore_path, uuid)
fname = "{}/{}-{}.stripped.txt".format(output_path, result_obj['previous_md5'], str(time.time()))
with open(fname, 'w') as f:
f.write(contents)
# Update history with the stripped text for future reference; this also means the very first snapshot gets recorded.
# Should always be keyed by string(timestamp)
self.update_watch(uuid, {"history": {str(result_obj["last_checked"]): fname}})
return fname
def sync_to_json(self):
print("Saving..")
with open(self.json_store_path, 'w') as json_file:
json.dump(self.__data, json_file, indent=4)
logging.info("Re-saved index")
self.needs_write = False
# Thread runner - helps avoid thread/write contention when many operations want to update the JSON,
# by running the save periodically in a single thread; according to Python, dict updates are thread-safe.
def save_datastore(self):
while True:
if self.stop_thread:
print("Shutting down datastore thread")
return
if self.needs_write:
self.sync_to_json()
time.sleep(1)
# body of the constructor

View File

@@ -1,76 +0,0 @@
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<meta name="description" content="Self hosted website change detection.">
<title>Change Detection</title>
<link rel="stylesheet" href="/static/css/pure-min.css">
<link rel="stylesheet" href="/static/css/styles.css?ver=1000">
{% if extra_stylesheets %}
{% for m in extra_stylesheets %}
<link rel="stylesheet" href="{{ m }}?ver=1000">
{% endfor %}
{% endif %}
</head>
<body>
<div class="header">
<div class="home-menu pure-menu pure-menu-horizontal pure-menu-fixed">
<a class="pure-menu-heading" href="/"><strong>Change</strong>Detection.io</a>
{% if current_diff_url %}
<a class="current-diff-url" href="{{ current_diff_url }}"><span style="max-width: 30%; overflow: hidden;">{{ current_diff_url }}</span></a>
{% else %}
{% if new_version_available %}
<span id="new-version-text" class="pure-menu-heading"><a href="https://github.com/dgtlmoon/changedetection.io">A new version is available</a></span>
{% endif %}
{% endif %}
<ul class="pure-menu-list">
<li class="pure-menu-item">
<a href="/backup" class="pure-menu-link">BACKUP</a>
</li>
<li class="pure-menu-item">
<a href="/import" class="pure-menu-link">IMPORT</a>
</li>
<li class="pure-menu-item">
<a href="/settings" class="pure-menu-link">SETTINGS</a>
</li>
<li class="pure-menu-item"><a class="github-link" href="https://github.com/dgtlmoon/changedetection.io">
<svg class="octicon octicon-mark-github v-align-middle" height="32" viewBox="0 0 16 16"
version="1.1"
width="32" aria-hidden="true">
<path fill-rule="evenodd"
d="M8 0C3.58 0 0 3.58 0 8c0 3.54 2.29 6.53 5.47 7.59.4.07.55-.17.55-.38 0-.19-.01-.82-.01-1.49-2.01.37-2.53-.49-2.69-.94-.09-.23-.48-.94-.82-1.13-.28-.15-.68-.52-.01-.53.63-.01 1.08.58 1.23.82.72 1.21 1.87.87 2.33.66.07-.52.28-.87.51-1.07-1.78-.2-3.64-.89-3.64-3.95 0-.87.31-1.59.82-2.15-.08-.2-.36-1.02.08-2.12 0 0 .67-.21 2.2.82.64-.18 1.32-.27 2-.27.68 0 1.36.09 2 .27 1.53-1.04 2.2-.82 2.2-.82.44 1.1.16 1.92.08 2.12.51.56.82 1.27.82 2.15 0 3.07-1.87 3.75-3.65 3.95.29.25.54.73.54 1.48 0 1.07-.01 1.93-.01 2.2 0 .21.15.46.55.38A8.013 8.013 0 0016 8c0-4.42-3.58-8-8-8z"></path>
</svg>
</a></li>
<!--
<li class="pure-menu-item"><a href="#" class="pure-menu-link">Tour</a></li>
<li class="pure-menu-item"><a href="#" class="pure-menu-link">Sign Up</a></li>
-->
</ul>
</div>
</div>
<div id="version">v{{ version }}</div>
<section class="content">
<header>
{% block header %}{% endblock %}
</header>
{% if messages %}
<div class="messages">
{% for message in messages %}
<div class="flash-message {{ message['class'] }}">{{ message['message'] }}</div>
{% endfor %}
</div>
{% endif %}
{% block content %}
{% endblock %}
</section>
</body>
</html>

View File

@@ -1,165 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div id="settings">
<h1>Differences</h1>
<form class="pure-form " action="" method="GET">
<fieldset>
<label for="diffWords" class="pure-checkbox">
<input type="radio" name="diff_type" id="diffWords" value="diffWords" /> Words</label>
<label for="diffLines" class="pure-checkbox">
<input type="radio" name="diff_type" id="diffLines" value="diffLines" checked=""/> Lines</label>
<label for="diffChars" class="pure-checkbox">
<input type="radio" name="diff_type" id="diffChars" value="diffChars"/> Chars</label>
{% if versions|length >= 1 %}
<label for="diff-version">Compare newest (<span id="current-v-date"></span>) with</label>
<select id="diff-version" name="previous_version">
{% for version in versions %}
<option value="{{version}}" {% if version== current_previous_version %} selected="" {% endif %}>
{{version}}
</option>
{% endfor %}
</select>
<button type="submit" class="pure-button pure-button-primary">Go</button>
{% endif %}
</fieldset>
</form>
<del>Removed text</del>
<ins>Inserted Text</ins>
</div>
<div id="diff-jump">
<a onclick="next_diff();">Jump</a>
</div>
<div id="diff-ui">
<table>
<tbody>
<tr>
<!-- just proof of concept copied straight from github.com/kpdecker/jsdiff -->
<td id="a" style="display: none;">{{previous}}</td>
<td id="b" style="display: none;">{{newest}}</td>
<td id="diff-col">
<span id="result"></span>
</td>
</tr>
</tbody>
</table>
Diff algorithm from the amazing <a href="https://github.com/kpdecker/jsdiff">github.com/kpdecker/jsdiff</a>
</div>
<script src="/static/js/diff.js"></script>
<script defer="">
var a = document.getElementById('a');
var b = document.getElementById('b');
var result = document.getElementById('result');
function changed() {
var diff = JsDiff[window.diffType](a.textContent, b.textContent);
var fragment = document.createDocumentFragment();
for (var i=0; i < diff.length; i++) {
if (diff[i].added && diff[i + 1] && diff[i + 1].removed) {
var swap = diff[i];
diff[i] = diff[i + 1];
diff[i + 1] = swap;
}
var node;
if (diff[i].removed) {
node = document.createElement('del');
node.classList.add("change");
node.appendChild(document.createTextNode(diff[i].value));
} else if (diff[i].added) {
node = document.createElement('ins');
node.classList.add("change");
node.appendChild(document.createTextNode(diff[i].value));
} else {
node = document.createTextNode(diff[i].value);
}
fragment.appendChild(node);
}
result.textContent = '';
result.appendChild(fragment);
}
window.onload = function() {
/* Convert what is options from UTC time.time() to local browser time */
var diffList=document.getElementById("diff-version");
if (typeof(diffList) != 'undefined' && diffList != null) {
for (var option of diffList.options) {
var dateObject = new Date(option.value*1000);
option.label=dateObject.toLocaleString();
}
}
/* Set current version date as local time in the browser also */
var current_v = document.getElementById("current-v-date");
var dateObject = new Date({{ newest_version_timestamp }}*1000);
current_v.innerHTML=dateObject.toLocaleString();
onDiffTypeChange(document.querySelector('#settings [name="diff_type"]:checked'));
changed();
};
a.onpaste = a.onchange =
b.onpaste = b.onchange = changed;
if ('oninput' in a) {
a.oninput = b.oninput = changed;
} else {
a.onkeyup = b.onkeyup = changed;
}
function onDiffTypeChange(radio) {
window.diffType = radio.value;
document.title = "Diff " + radio.value.slice(4);
}
var radio = document.getElementsByName('diff_type');
for (var i = 0; i < radio.length; i++) {
radio[i].onchange = function(e) {
onDiffTypeChange(e.target);
changed();
}
}
var inputs = document.getElementsByClassName('change');
inputs.current=0;
function next_diff() {
var element = inputs[inputs.current];
var headerOffset = 80;
var elementPosition = element.getBoundingClientRect().top;
var offsetPosition = elementPosition - headerOffset + window.scrollY;
window.scrollTo({
top: offsetPosition,
behavior: "smooth"
});
inputs.current++;
if(inputs.current >= inputs.length) {
inputs.current=0;
}
}
</script>
{% endblock %}

View File

@@ -1,73 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<form class="pure-form pure-form-stacked" action="/edit/{{uuid}}" method="POST">
<fieldset>
<div class="pure-control-group">
<label for="url">URL</label>
<input type="url" id="url" required="" placeholder="https://..." name="url" value="{{ watch.url}}"
size="50"/>
<span class="pure-form-message-inline">This is a required field.</span>
</div>
<div class="pure-control-group">
<label for="tag">Tag</label>
<input type="text" placeholder="tag" size="10" id="tag" name="tag" value="{{ watch.tag}}"/>
<span class="pure-form-message-inline">Grouping tags, can be a comma separated list.</span>
</div>
<!-- @todo: move to tabs -->
<fieldset class="pure-group">
<label for="ignore-text">Ignore text</label>
<textarea id="ignore-text" name="ignore-text" class="pure-input-1-2" placeholder=""
style="width: 100%;
font-family:monospace;
white-space: pre;
overflow-wrap: normal;
overflow-x: scroll;" rows="5">{% for value in watch.ignore_text %}{{ value }}
{% endfor %}</textarea>
<span class="pure-form-message-inline">Each line will be processed separately as an ignore rule.</span>
</fieldset>
<!-- @todo: move to tabs -->
<fieldset class="pure-group">
<label for="headers">Extra request headers</label>
<textarea id="headers" name="headers" class="pure-input-1-2" placeholder="Example
Cookie: foobar
User-Agent: wonderbra 1.0"
style="width: 100%;
font-family:monospace;
white-space: pre;
overflow-wrap: normal;
overflow-x: scroll;" rows="5">{% for key, value in watch.headers.items() %}{{ key }}: {{ value }}
{% endfor %}</textarea>
<br/>
</fieldset>
<div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Save</button>
</div>
<br/>
<div class="pure-control-group">
<a href="/" class="pure-button button-small button-cancel">Cancel</a>
<a href="/api/delete?uuid={{uuid}}"
class="pure-button button-small button-error ">Delete</a>
</div>
</fieldset>
</form>
</div>
{% endblock %}

View File

@@ -1,26 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<form class="pure-form pure-form-aligned" action="/import" method="POST">
<fieldset class="pure-group">
<legend>One URL per line, URLs that do not pass validation will stay in the textarea.</legend>
<textarea name="urls" class="pure-input-1-2" placeholder="https://"
style="width: 100%;
font-family:monospace;
white-space: pre;
overflow-wrap: normal;
overflow-x: scroll;" rows="25">{{ remaining }}</textarea>
</fieldset>
<button type="submit" class="pure-button pure-input-1-2 pure-button-primary">Import</button>
</form>
</div>
{% endblock %}

View File

@@ -1,43 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<form class="pure-form pure-form-stacked" action="/scrub" method="POST">
<fieldset>
<div class="pure-control-group">
This will remove all version snapshots/data, but keep your list of URLs. <br/>
You may like to use the <strong>BACKUP</strong> link first.<br/>
Type in the word <strong>scrub</strong> to confirm that you understand!
<br/>
</div>
<div class="pure-control-group">
<br/>
<label for="confirmtext">Confirm</label><br/>
<input type="text" id="confirmtext" required="" name="confirmtext" value="" size="10"/>
<br/>
</div>
<div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Scrub!</button>
</div>
<br/>
<div class="pure-control-group">
<a href="/" class="pure-button button-small button-cancel">Cancel</a>
</div>
</fieldset>
</form>
</div>
{% endblock %}

View File

@@ -1,35 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div class="edit-form">
<form class="pure-form pure-form-stacked" action="/settings" method="POST">
<fieldset>
<div class="pure-control-group">
<label for="minutes">Maximum time in minutes until recheck.</label>
<input type="text" id="minutes" required="" name="minutes" value="{{minutes}}"
size="5"/>
<span class="pure-form-message-inline">This is a required field.</span>
</div>
<br/>
<div class="pure-control-group">
<button type="submit" class="pure-button pure-button-primary">Save</button>
</div>
<br/>
<div class="pure-control-group">
<a href="/" class="pure-button button-small button-cancel">Back</a>
<a href="/scrub" class="pure-button button-small button-cancel">Reset all version data</a>
</div>
</fieldset>
</form>
</div>
{% endblock %}

View File

@@ -1,90 +0,0 @@
{% extends 'base.html' %}
{% block content %}
<div class="box">
<form class="pure-form" action="/api/add" method="POST" id="new-watch-form">
<fieldset>
<legend>Add a new change detection watch</legend>
<input type="url" placeholder="https://..." name="url"/>
<input type="text" placeholder="tag" size="10" name="tag" value="{{active_tag if active_tag}}"/>
<button type="submit" class="pure-button pure-button-primary">Watch</button>
</fieldset>
<!-- add extra stuff, like do a http POST and send headers -->
<!-- user/pass r = requests.get('https://api.github.com/user', auth=('user', 'pass')) -->
</form>
<div>
<a href="/" class="pure-button button-tag {{'active' if not active_tag }}">All</a>
{% for tag in tags %}
{% if tag != "" %}
<a href="/?tag={{ tag}}" class="pure-button button-tag {{'active' if active_tag == tag }}">{{ tag }}</a>
{% endif %}
{% endfor %}
</div>
<div id="watch-table-wrapper">
<table class="pure-table pure-table-striped watch-table">
<thead>
<tr>
<th>#</th>
<th></th>
<th>Last Checked</th>
<th>Last Changed</th>
<th></th>
</tr>
</thead>
<tbody>
{% for watch in watches %}
<tr id="{{ watch.uuid }}"
class="{{ loop.cycle('pure-table-odd', 'pure-table-even') }}
{% if watch.last_error is defined and watch.last_error != False %}error{% endif %}
{% if watch.newest_history_key| int > watch.last_viewed| int %}unviewed{% endif %}">
<td>{{ loop.index }}</td>
<td class="title-col">{{watch.title if watch.title is not none else watch.url}}
<a class="external" target=_blank href="{{ watch.url }}"></a>
{% if watch.last_error is defined and watch.last_error != False %}
<div class="fetch-error">{{ watch.last_error }}</div>
{% endif %}
{% if not active_tag %}
<span class="watch-tag-list">{{ watch.tag}}</span>
{% endif %}
</td>
<td>{{watch|format_last_checked_time}}</td>
<td>{% if watch.history|length >= 2 and watch.last_changed %}
{{watch.last_changed|format_timestamp_timeago}}
{% else %}
Not yet
{% endif %}
</td>
<td>
<a href="/api/checknow?uuid={{ watch.uuid}}{% if request.args.get('tag') %}&tag={{request.args.get('tag')}}{% endif %}"
class="pure-button button-small pure-button-primary">Recheck</a>
<a href="/edit/{{ watch.uuid}}" class="pure-button button-small pure-button-primary">Edit</a>
{% if watch.history|length >= 2 %}
<a href="/diff/{{ watch.uuid}}" class="pure-button button-small pure-button-primary">Diff</a>
{% endif %}
</td>
</tr>
{% endfor %}
</tbody>
</table>
<ul id="post-list-buttons">
{% if has_unviewed %}
<li>
<a href="/api/mark-all-viewed" class="pure-button button-tag ">Mark all viewed</a>
</li>
{% endif %}
<li>
<a href="/api/checknow{% if active_tag%}?tag={{active_tag}}{%endif%}" class="pure-button button-tag ">Recheck
all {% if active_tag%}in "{{active_tag}}"{%endif%}</a>
</li>
<li>
<a href="{{ url_for('index', tag=active_tag , rss=true)}}"><img id="feed-icon" src="/static/images/Generic_Feed-icon.svg" height="15px"></a>
</li>
</ul>
</div>
</div>
{% endblock %}

View File

@@ -1,123 +0,0 @@
#!/usr/bin/python3
import time
from flask import url_for
from urllib.request import urlopen
import pytest
sleep_time_for_fetch_thread = 3
def test_setup_liveserver(live_server):
@live_server.app.route('/test-endpoint')
def test_endpoint():
# Tried using a global var here but didn't seem to work, so reading from a file instead.
with open("test-datastore/output.txt", "r") as f:
return f.read()
live_server.start()
assert 1 == 1
def set_original_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>Which is across multiple lines</p>
</br>
So let's see what happens. </br>
</body>
</html>
"""
with open("test-datastore/output.txt", "w") as f:
f.write(test_return_data)
def set_modified_response():
test_return_data = """<html>
<body>
Some initial text</br>
<p>which has this one new line</p>
</br>
So let's see what happens. </br>
</body>
</html>
"""
with open("test-datastore/output.txt", "w") as f:
f.write(test_return_data)
def test_check_basic_change_detection_functionality(client, live_server):
set_original_response()
# Add our URL to the import page
res = client.post(
url_for("import_page"),
data={"urls": url_for('test_endpoint', _external=True)},
follow_redirects=True
)
assert b"1 Imported" in res.data
time.sleep(sleep_time_for_fetch_thread)
# Do this a few times.. ensures we don't accidentally set the status
for n in range(3):
client.get(url_for("api_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
assert b'test-endpoint' in res.data
#####################
# Make a change
set_modified_response()
res = urlopen(url_for('test_endpoint', _external=True))
assert b'which has this one new line' in res.read()
# Force recheck
res = client.get(url_for("api_watch_checknow"), follow_redirects=True)
assert b'1 watches are rechecking.' in res.data
time.sleep(sleep_time_for_fetch_thread)
# Now something should be ready, indicated by having a 'unviewed' class
res = client.get(url_for("index"))
assert b'unviewed' in res.data
# Following the 'diff' link, it should no longer display as 'unviewed' even after we recheck it a few times
res = client.get(url_for("diff_history_page", uuid="first"))
assert b'Compare newest' in res.data
time.sleep(2)
# Do this a few times.. ensures we don't accidentally set the status
for n in range(2):
client.get(url_for("api_watch_checknow"), follow_redirects=True)
# Give the thread time to pick it up
time.sleep(sleep_time_for_fetch_thread)
# It should report nothing found (no new 'unviewed' class)
res = client.get(url_for("index"))
assert b'unviewed' not in res.data
assert b'test-endpoint' in res.data
set_original_response()
client.get(url_for("api_watch_checknow"), follow_redirects=True)
time.sleep(sleep_time_for_fetch_thread)
res = client.get(url_for("index"))
assert b'unviewed' in res.data
# Cleanup everything
res = client.get(url_for("api_delete", uuid="all"), follow_redirects=True)
assert b'Deleted' in res.data

99
changedetection.py Normal file → Executable file
View File

@@ -1,73 +1,44 @@
#!/usr/bin/python3
# Launch as an eventlet.wsgi server instance.
# Entry-point for running from the CLI when not installed via Pip, Pip will handle the console_scripts entry_points's from setup.py
# It's recommended to use `pip3 install changedetection.io` and start with `changedetection.py` instead, it will be linked to your global path.
# or Docker.
# Read more https://github.com/dgtlmoon/changedetection.io/wiki
import getopt
from changedetectionio import changedetection
import multiprocessing
import sys
import os
import eventlet
import eventlet.wsgi
import backend
def sigchld_handler(_signo, _stack_frame):
import sys
print('Shutdown: Got SIGCHLD')
# https://stackoverflow.com/questions/40453496/python-multiprocessing-capturing-signals-to-restart-child-processes-or-shut-do
pid, status = os.waitpid(-1, os.WNOHANG | os.WUNTRACED | os.WCONTINUED)
from backend import store
def main(argv):
ssl_mode = False
port = 5000
datastore_path = "./datastore"
try:
opts, args = getopt.getopt(argv, "sd:p:", "purge")
except getopt.GetoptError:
print('backend.py -s SSL enable -p [port] -d [datastore path]')
sys.exit(2)
for opt, arg in opts:
# if opt == '--purge':
# Remove history, the actual files you need to delete manually.
# for uuid, watch in datastore.data['watching'].items():
# watch.update({'history': {}, 'last_checked': 0, 'last_changed': 0, 'previous_md5': None})
if opt == '-s':
ssl_mode = True
if opt == '-p':
port = int(arg)
if opt == '-d':
datastore_path = arg
# threads can read from disk every x seconds right?
# front end can just save
# We just need to know which threads are looking at which UUIDs
# isn't there some @thingy to attach to each route to tell it that this route needs a datastore
app_config = {'datastore_path': datastore_path}
datastore = store.ChangeDetectionStore(datastore_path=app_config['datastore_path'])
app = backend.changedetection_app(app_config, datastore)
@app.context_processor
def inject_version():
return dict(version=datastore.data['version_tag'])
@app.context_processor
def inject_new_version_available():
return dict(new_version_available=app.config['NEW_VERSION_AVAILABLE'])
if ssl_mode:
# @todo finalise SSL config, but this should get you in the right direction if you need it.
eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen(('', port)),
certfile='cert.pem',
keyfile='privkey.pem',
server_side=True), app)
else:
eventlet.wsgi.server(eventlet.listen(('', port)), app)
print('Sub-process: pid %d status %d' % (pid, status))
if status != 0:
sys.exit(1)
raise SystemExit
if __name__ == '__main__':
main(sys.argv[1:])
#signal.signal(signal.SIGCHLD, sigchld_handler)
# The only way I could find to get Flask to shutdown, is to wrap it and then rely on the subsystem issuing SIGTERM/SIGKILL
parse_process = multiprocessing.Process(target=changedetection.main)
parse_process.daemon = True
parse_process.start()
import time
try:
while True:
time.sleep(1)
if not parse_process.is_alive():
# Process died/crashed for some reason, exit with error set
sys.exit(1)
except KeyboardInterrupt:
#parse_process.terminate() not needed, because this process will issue it to the sub-process anyway
print ("Exited - CTRL+C")

2
changedetectionio/.gitignore vendored Normal file
View File

@@ -0,0 +1,2 @@
test-datastore
package-lock.json

File diff suppressed because it is too large

View File

View File

@@ -0,0 +1,117 @@
# Responsible for building the storage dict into a set of rules ("JSON Schema") acceptable via the API
# Probably other ways to solve this when the backend switches to some ORM
def build_time_between_check_json_schema():
# Setup time between check schema
schema_properties_time_between_check = {
"type": "object",
"additionalProperties": False,
"properties": {}
}
for p in ['weeks', 'days', 'hours', 'minutes', 'seconds']:
schema_properties_time_between_check['properties'][p] = {
"anyOf": [
{
"type": "integer"
},
{
"type": "null"
}
]
}
return schema_properties_time_between_check
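# A small sketch of what this sub-schema accepts - each unit is an integer or null, nothing else,
# and unknown keys are rejected because additionalProperties is False (example values only):
#   {"minutes": 30, "hours": None}    # valid
#   {"minutes": "30"}                 # rejected, strings are not allowed
#   {"fortnights": 1}                 # rejected, unknown property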
def build_watch_json_schema(d):
# Base JSON schema
schema = {
'type': 'object',
'properties': {},
}
for k, v in d.items():
# @todo 'integer' is not covered here because it's almost always for internal usage
if isinstance(v, type(None)):
schema['properties'][k] = {
"anyOf": [
{"type": "null"},
]
}
elif isinstance(v, list):
schema['properties'][k] = {
"anyOf": [
{"type": "array",
# Always is an array of strings, like text or regex or something
"items": {
"type": "string",
"maxLength": 5000
}
},
]
}
elif isinstance(v, bool):
schema['properties'][k] = {
"anyOf": [
{"type": "boolean"},
]
}
elif isinstance(v, str):
schema['properties'][k] = {
"anyOf": [
{"type": "string",
"maxLength": 5000},
]
}
# Can also be a string (or None by default above)
for v in ['body',
'notification_body',
'notification_format',
'notification_title',
'proxy',
'tag',
'title',
'webdriver_js_execute_code'
]:
schema['properties'][v]['anyOf'].append({'type': 'string', "maxLength": 5000})
# None or Boolean
schema['properties']['track_ldjson_price_data']['anyOf'].append({'type': 'boolean'})
schema['properties']['method'] = {"type": "string",
"enum": ["GET", "POST", "DELETE", "PUT"]
}
schema['properties']['fetch_backend']['anyOf'].append({"type": "string",
"enum": ["html_requests", "html_webdriver"]
})
# All headers must be key/value type dict
schema['properties']['headers'] = {
"type": "object",
"patternProperties": {
# Should always be a string:string type value
".*": {"type": "string"},
}
}
from changedetectionio.notification import valid_notification_formats
schema['properties']['notification_format'] = {'type': 'string',
'enum': list(valid_notification_formats.keys())
}
# Stuff that shouldn't be available but is just state-storage
for v in ['previous_md5', 'last_error', 'has_ldjson_price_data', 'previous_md5_before_filters', 'uuid']:
del schema['properties'][v]
schema['properties']['webdriver_delay']['anyOf'].append({'type': 'integer'})
schema['properties']['time_between_check'] = build_time_between_check_json_schema()
# headers ?
return schema
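# A minimal sketch of exercising the generated schema directly with the jsonschema package
# (flask_expects_json performs broadly the same check); the payload values are only examples:
#   from jsonschema import validate, ValidationError
#   from changedetectionio.model.Watch import base_config
#   schema = build_watch_json_schema(base_config)
#   validate(instance={"title": "My pretty title", "tag": "shopping"}, schema=schema)   # passes
#   validate(instance={"title": 123}, schema=schema)                                    # raises ValidationError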

View File

@@ -0,0 +1,279 @@
from flask_expects_json import expects_json
from changedetectionio import queuedWatchMetaData
from flask_restful import abort, Resource
from flask import request, make_response
import validators
from . import auth
import copy
# See docs/README.md for rebuilding the docs/apidoc information
from . import api_schema
# Build a JSON Schema at least partially based on our Watch model
from changedetectionio.model.Watch import base_config as watch_base_config
schema = api_schema.build_watch_json_schema(watch_base_config)
schema_create_watch = copy.deepcopy(schema)
schema_create_watch['required'] = ['url']
schema_update_watch = copy.deepcopy(schema)
schema_update_watch['additionalProperties'] = False
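# In short: schema_create_watch is the base schema plus a mandatory "url", while schema_update_watch
# rejects unknown keys. Illustrative payloads (field values are examples only):
#   POST /api/v1/watch          {"url": "https://example.com", "tag": "nice list"}   # "url" is required
#   PUT  /api/v1/watch/<uuid>   {"title": "New title"}                               # partial update is fine
#   PUT  /api/v1/watch/<uuid>   {"not_a_real_field": 1}                              # rejected, additionalProperties is False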
class Watch(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
self.update_q = kwargs['update_q']
# Get information about a single watch, excluding the history list (can be large)
# curl http://localhost:4000/api/v1/watch/<string:uuid>
# @todo - version2 - ?muted and ?paused should be able to be called together, return the watch struct not "OK"
# ?recheck=true
@auth.check_token
def get(self, uuid):
"""
@api {get} /api/v1/watch/:uuid Single watch information
@apiDescription Retrieve watch information and set muted/paused status
@apiExample {curl} Example usage:
curl http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091 -H"x-api-key:813031b16330fe25e3780cf0325daa45"
curl "http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091?muted=unmuted" -H"x-api-key:813031b16330fe25e3780cf0325daa45"
curl "http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091?paused=unpaused" -H"x-api-key:813031b16330fe25e3780cf0325daa45"
@apiName Watch
@apiGroup Watch
@apiParam {uuid} uuid Watch unique ID.
@apiQuery {Boolean} [recheck] Recheck this watch `recheck=1`
@apiQuery {String} [paused] =`paused` or =`unpaused` , Sets the PAUSED state
@apiQuery {String} [muted] =`muted` or =`unmuted` , Sets the MUTE NOTIFICATIONS state
@apiSuccess (200) {String} OK When paused/muted/recheck operation OR full JSON object of the watch
@apiSuccess (200) {JSON} WatchJSON JSON Full JSON object of the watch
"""
from copy import deepcopy
watch = deepcopy(self.datastore.data['watching'].get(uuid))
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
if request.args.get('recheck'):
self.update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': True}))
return "OK", 200
if request.args.get('paused', '') == 'paused':
self.datastore.data['watching'].get(uuid).pause()
return "OK", 200
elif request.args.get('paused', '') == 'unpaused':
self.datastore.data['watching'].get(uuid).unpause()
return "OK", 200
if request.args.get('muted', '') == 'muted':
self.datastore.data['watching'].get(uuid).mute()
return "OK", 200
elif request.args.get('muted', '') == 'unmuted':
self.datastore.data['watching'].get(uuid).unmute()
return "OK", 200
# Return without history, get that via another API call
watch['history_n'] = watch.history_n
return watch
@auth.check_token
def delete(self, uuid):
"""
@api {delete} /api/v1/watch/:uuid Delete watch information
@apiExample {curl} Example usage:
curl http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091 -X DELETE -H"x-api-key:813031b16330fe25e3780cf0325daa45"
@apiParam {uuid} uuid Watch unique ID.
@apiName Delete
@apiGroup Watch
@apiSuccess (200) {String} OK Was deleted
"""
if not self.datastore.data['watching'].get(uuid):
abort(400, message='No watch exists with the UUID of {}'.format(uuid))
self.datastore.delete(uuid)
return 'OK', 204
# Update an existing
@auth.check_token
@expects_json(schema_update_watch)
def put(self, uuid):
"""
@api {put} /api/v1/watch/:uuid Update watch information
@apiExample {curl} Example usage:
Create a watch (POST)
curl http://localhost:4000/api/v1/watch -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json" -d '{"url": "https://my-nice.com" , "tag": "nice list"}'
Update (PUT)
curl http://localhost:4000/api/v1/watch/cc0cfffa-f449-477b-83ea-0caafd1dc091 -X PUT -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json" -d '{"url": "https://my-nice.com" , "tag": "new list"}'
@apiDescription Updates an existing watch using JSON, accepts the same structure as at https://github.com/dgtlmoon/changedetection.io/blob/fab7d325f764d6912bef671f1d78bf217689c537/changedetectionio/model/Watch.py#L15
@apiParam {uuid} uuid Watch unique ID.
@apiName Update
@apiGroup Watch
@apiSuccess (200) {String} OK Was updated
@apiSuccess (500) {String} ERR Some other error
"""
watch = self.datastore.data['watching'].get(uuid)
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
if request.json.get('proxy'):
plist = self.datastore.proxy_list
if request.json.get('proxy') not in plist:
return "Invalid proxy choice, currently supported proxies are '{}'".format(', '.join(plist)), 400
watch.update(request.json)
return "OK", 200
class WatchHistory(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
# Get a list of available history for a watch by UUID
# curl http://localhost:4000/api/v1/watch/<string:uuid>/history
def get(self, uuid):
watch = self.datastore.data['watching'].get(uuid)
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
return watch.history, 200
class WatchSingleHistory(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
# Read a given history snapshot and return its content
# <string:timestamp> or "latest"
# curl http://localhost:4000/api/v1/watch/<string:uuid>/history/<int:timestamp>
@auth.check_token
def get(self, uuid, timestamp):
watch = self.datastore.data['watching'].get(uuid)
if not watch:
abort(404, message='No watch exists with the UUID of {}'.format(uuid))
if not len(watch.history):
abort(404, message='Watch found but no history exists for the UUID {}'.format(uuid))
if timestamp == 'latest':
timestamp = list(watch.history.keys())[-1]
with open(watch.history[timestamp], 'r') as f:
content = f.read()
response = make_response(content, 200)
response.mimetype = "text/plain"
return response
class CreateWatch(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
self.update_q = kwargs['update_q']
@auth.check_token
@expects_json(schema_create_watch)
def post(self):
"""
@api {post} /api/v1/watch Create a watch
@apiDescription requires `url`, Creates a watch, also accepts the same structure as at https://github.com/dgtlmoon/changedetection.io/blob/fab7d325f764d6912bef671f1d78bf217689c537/changedetectionio/model/Watch.py#L15
@apiExample {curl} Example usage:
curl http://localhost:4000/api/v1/watch -H"x-api-key:813031b16330fe25e3780cf0325daa45" -H "Content-Type: application/json" -d '{"url": "https://my-nice.com" , "tag": "nice list"}'
@apiName Create
@apiGroup CreateWatch
@apiSuccess (200) {String} OK Was created
@apiSuccess (500) {String} ERR Some other error
"""
#
json_data = request.get_json()
url = json_data['url'].strip()
if not validators.url(url):
return "Invalid or unsupported URL", 400
if json_data.get('proxy'):
plist = self.datastore.proxy_list
if json_data.get('proxy') not in plist:
return "Invalid proxy choice, currently supported proxies are '{}'".format(', '.join(plist)), 400
extras = copy.deepcopy(json_data)
del extras['url']
new_uuid = self.datastore.add_watch(url=url, extras=extras)
if new_uuid:
self.update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': new_uuid, 'skip_when_checksum_same': True}))
return {'uuid': new_uuid}, 201
else:
return "Invalid or unsupported URL", 400
@auth.check_token
def get(self):
"""
@api {get} /api/v1/watch
@apiDescription Return concise list of available watches and some very basic info
@apiExample {curl} Example usage:
curl http://localhost:4000/api/v1/watch -H"x-api-key:813031b16330fe25e3780cf0325daa45"
recheck_all=1 to recheck all
@apiParam {String} [recheck_all] Optional Set to =1 to force recheck of all watches
@apiParam {String} [tag] Optional name of tag to limit results
@apiName ListWatches
@apiGroup CreateWatch
:return:
"""
watch_list = {}
tag_limit = request.args.get('tag', None)
for k, watch in self.datastore.data['watching'].items():
if tag_limit:
if tag_limit.lower() not in watch.all_tags:
continue
watch_list[k] = {'url': watch['url'],
'title': watch['title'],
'last_checked': watch['last_checked'],
'last_changed': watch.last_changed,
'last_error': watch['last_error']}
if request.args.get('recheck_all'):
for uuid in self.datastore.data['watching'].keys():
self.update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': True}))
return {'status': "OK"}, 200
return watch_list, 200
class SystemInfo(Resource):
def __init__(self, **kwargs):
# datastore is a black box dependency
self.datastore = kwargs['datastore']
self.update_q = kwargs['update_q']
@auth.check_token
def get(self):
import time
overdue_watches = []
# Check all watches and report which have not been checked but should have been
for uuid, watch in self.datastore.data.get('watching', {}).items():
# see if now - last_checked is greater than the time that should have been
# this is not super accurate (maybe they just edited it) but better than nothing
t = watch.threshold_seconds()
if not t:
# Use the system wide default
t = self.datastore.threshold_seconds
time_since_check = time.time() - watch.get('last_checked')
# Allow 5 minutes of grace time before we decide it's overdue
if time_since_check - (5 * 60) > t:
overdue_watches.append(uuid)
return {
'queue_size': self.update_q.qsize(),
'overdue_watches': overdue_watches,
'uptime': round(time.time() - self.datastore.start_time, 2),
'watch_count': len(self.datastore.data.get('watching', {}))
}, 200
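# Worked example of the grace-time logic above: with a one-hour threshold (t = 3600) and a watch
# last checked 64 minutes ago, time_since_check = 3840 and 3840 - 300 = 3540, which is not > 3600,
# so it is not flagged yet; at 70 minutes, 4200 - 300 = 3900 > 3600, so the watch is reported overdue.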

View File

@@ -0,0 +1,33 @@
from flask import request, make_response, jsonify
from functools import wraps
# Simple API auth key comparison
# @todo - Maybe short lived token in the future?
def check_token(f):
@wraps(f)
def decorated(*args, **kwargs):
datastore = args[0].datastore
config_api_token_enabled = datastore.data['settings']['application'].get('api_access_token_enabled')
if not config_api_token_enabled:
# Token authentication is disabled - let the request straight through
return f(*args, **kwargs)
try:
api_key_header = request.headers['x-api-key']
except KeyError:
return make_response(
jsonify("No authorization x-api-key header."), 403
)
config_api_token = datastore.data['settings']['application'].get('api_access_token')
if api_key_header != config_api_token:
return make_response(
jsonify("Invalid access - API key invalid."), 403
)
return f(*args, **kwargs)
return decorated
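# A minimal sketch of calling a token-protected endpoint from Python; the API key below is a
# placeholder, and the curl equivalents live in the apidoc strings of the resources themselves:
#   import requests
#   r = requests.get('http://localhost:4000/api/v1/watch',
#                    headers={'x-api-key': 'YOUR-API-KEY'})
#   print(r.status_code, r.json())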

View File

@@ -0,0 +1,11 @@
import apprise
# Create our AppriseAsset and populate it with some of our new values:
# https://github.com/caronc/apprise/wiki/Development_API#the-apprise-asset-object
asset = apprise.AppriseAsset(
image_url_logo='https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/changedetectionio/static/images/avatar-256x256.png'
)
asset.app_id = "changedetection.io"
asset.app_desc = "ChangeDetection.io best and simplest website monitoring and change detection"
asset.app_url = "https://changedetection.io"

View File

View File

@@ -0,0 +1,230 @@
# HORRIBLE HACK BUT WORKS :-) PR anyone?
#
# Why?
# `browsersteps_playwright_browser_interface.chromium.connect_over_cdp()` will only run once without async()
# - this flask app is not async()
# - browserless has a single timeout/keepalive which applies to the session made at .connect_over_cdp()
#
# So it means that we must unfortunately for now just keep a single timer since .connect_over_cdp() was run
# and know when that reaches timeout/keepalive :( when that time is up, restart the connection and tell the user
# that their time is up, insert another coin. (reload)
#
# Bigger picture
# - It's horrible that we have this click+wait deal, some nice socket.io solution using something similar
# to what the browserless debug UI already gives us would be smarter..
#
# OR
# - Some API call that should be hacked into browserless or playwright that we can "/api/bump-keepalive/{session_id}/60"
# So we can tell it that we need more time (run this on each action)
#
# OR
# - use multiprocessing to bump this over to its own process and add some transport layer (queue/pipes)
from distutils.util import strtobool
from flask import Blueprint, request, make_response
import os
import logging
from changedetectionio.store import ChangeDetectionStore
from changedetectionio import login_optionally_required
browsersteps_live_ui_o = {}
browsersteps_playwright_browser_interface = None
browsersteps_playwright_browser_interface_browser = None
browsersteps_playwright_browser_interface_context = None
browsersteps_playwright_browser_interface_end_time = None
browsersteps_playwright_browser_interface_start_time = None
def cleanup_playwright_session():
global browsersteps_live_ui_o
global browsersteps_playwright_browser_interface
global browsersteps_playwright_browser_interface_browser
global browsersteps_playwright_browser_interface_context
global browsersteps_playwright_browser_interface_end_time
global browsersteps_playwright_browser_interface_start_time
browsersteps_live_ui_o = {}
browsersteps_playwright_browser_interface = None
browsersteps_playwright_browser_interface_browser = None
browsersteps_playwright_browser_interface_end_time = None
browsersteps_playwright_browser_interface_start_time = None
print("Cleaning up old playwright session because time was up, calling .goodbye()")
try:
browsersteps_playwright_browser_interface_context.goodbye()
except Exception as e:
print ("Got exception in shutdown, probably OK")
print (str(e))
browsersteps_playwright_browser_interface_context = None
print ("Cleaning up old playwright session because time was up - done")
def construct_blueprint(datastore: ChangeDetectionStore):
browser_steps_blueprint = Blueprint('browser_steps', __name__, template_folder="templates")
@login_optionally_required
@browser_steps_blueprint.route("/browsersteps_update", methods=['GET', 'POST'])
def browsersteps_ui_update():
import base64
import playwright._impl._api_types
import time
from changedetectionio.blueprint.browser_steps import browser_steps
global browsersteps_live_ui_o, browsersteps_playwright_browser_interface_end_time
global browsersteps_playwright_browser_interface_browser
global browsersteps_playwright_browser_interface
global browsersteps_playwright_browser_interface_start_time
step_n = None
remaining = 0
uuid = request.args.get('uuid')
browsersteps_session_id = request.args.get('browsersteps_session_id')
if not browsersteps_session_id:
return make_response('No browsersteps_session_id specified', 500)
# Because we don't "really" run in a context manager ( we make the playwright interface global/long-living )
# We need to manage the shutdown when the time is up
if browsersteps_playwright_browser_interface_end_time:
remaining = browsersteps_playwright_browser_interface_end_time-time.time()
if browsersteps_playwright_browser_interface_end_time and remaining <= 0:
cleanup_playwright_session()
return make_response('Browser session expired, please reload the Browser Steps interface', 401)
# Actions - step/apply/etc, do the thing and return state
if request.method == 'POST':
# @todo - should always be an existing session
step_operation = request.form.get('operation')
step_selector = request.form.get('selector')
step_optional_value = request.form.get('optional_value')
step_n = int(request.form.get('step_n'))
is_last_step = strtobool(request.form.get('is_last_step'))
if step_operation == 'Goto site':
step_operation = 'goto_url'
step_optional_value = None
step_selector = datastore.data['watching'][uuid].get('url')
# @todo try.. except.. nice errors not popups..
try:
this_session = browsersteps_live_ui_o.get(browsersteps_session_id)
if not this_session:
print("Browser exited")
return make_response('Browser session ran out of time :( Please reload this page.', 401)
this_session.call_action(action_name=step_operation,
selector=step_selector,
optional_value=step_optional_value)
except Exception as e:
print("Exception when calling step operation", step_operation, str(e))
# Try to find something of value to give back to the user
return make_response(str(e).splitlines()[0], 401)
# Get visual selector ready/update its data (also use the current filter info from the page?)
# When the last 'apply' button was pressed
# @todo this adds overhead because the xpath selection is happening twice
u = this_session.page.url
if is_last_step and u:
(screenshot, xpath_data) = this_session.request_visualselector_data()
datastore.save_screenshot(watch_uuid=uuid, screenshot=screenshot)
datastore.save_xpath_data(watch_uuid=uuid, data=xpath_data)
# Setup interface
if request.method == 'GET':
if not browsersteps_playwright_browser_interface:
print("Starting connection with playwright")
logging.debug("browser_steps.py connecting")
global browsersteps_playwright_browser_interface_context
from . import nonContext
browsersteps_playwright_browser_interface_context = nonContext.c_sync_playwright()
browsersteps_playwright_browser_interface = browsersteps_playwright_browser_interface_context.start()
time.sleep(1)
# At 20 minutes, some other variable is closing it
# @todo find out what it is and set it
seconds_keepalive = int(os.getenv('BROWSERSTEPS_MINUTES_KEEPALIVE', 10)) * 60
# keep it alive for a few seconds (3s) more than we advertise, sometimes it helps it shut down cleanly
keepalive = "&timeout={}".format(((seconds_keepalive+3) * 1000))
try:
browsersteps_playwright_browser_interface_browser = browsersteps_playwright_browser_interface.chromium.connect_over_cdp(
os.getenv('PLAYWRIGHT_DRIVER_URL', '') + keepalive)
except Exception as e:
if 'ECONNREFUSED' in str(e):
return make_response('Unable to start the Playwright session properly, is it running?', 401)
browsersteps_playwright_browser_interface_end_time = time.time() + (seconds_keepalive-3)
print("Starting connection with playwright - done")
if not browsersteps_live_ui_o.get(browsersteps_session_id):
# Boot up a new session
proxy_id = datastore.get_preferred_proxy_for_watch(uuid=uuid)
proxy = None
if proxy_id:
proxy_url = datastore.proxy_list.get(proxy_id).get('url')
if proxy_url:
proxy = {'server': proxy_url}
print("Browser Steps: UUID {} Using proxy {}".format(uuid, proxy_url))
# Begin the new "Playwright Context" that re-uses the playwright interface
# Each session is a "Playwright Context" as a list, that uses the playwright interface
browsersteps_live_ui_o[browsersteps_session_id] = browser_steps.browsersteps_live_ui(
playwright_browser=browsersteps_playwright_browser_interface_browser,
proxy=proxy)
this_session = browsersteps_live_ui_o[browsersteps_session_id]
if not this_session.page:
cleanup_playwright_session()
return make_response('Browser session ran out of time :( Please reload this page.', 401)
response = None
if request.method == 'POST':
# Screenshots and other info only needed on requesting a step (POST)
try:
state = this_session.get_current_state()
except playwright._impl._api_types.Error as e:
return make_response("Browser session ran out of time :( Please reload this page."+str(e), 401)
# Use send_file() which is way faster than read/write loop on bytes
import json
from tempfile import mkstemp
from flask import send_file
tmp_fd, tmp_file = mkstemp(text=True, suffix=".json", prefix="changedetectionio-")
output = json.dumps({'screenshot': "data:image/jpeg;base64,{}".format(
base64.b64encode(state[0]).decode('ascii')),
'xpath_data': state[1],
'session_age_start': this_session.age_start,
'browser_time_remaining': round(remaining)
})
with os.fdopen(tmp_fd, 'w') as f:
f.write(output)
response = make_response(send_file(path_or_file=tmp_file,
mimetype='application/json; charset=UTF-8',
etag=True))
# No longer needed
os.unlink(tmp_file)
elif request.method == 'GET':
# Just enough to get the session rolling, it will call for goto-site via POST next
response = make_response({
'session_age_start': this_session.age_start,
'browser_time_remaining': round(remaining)
})
return response
return browser_steps_blueprint

View File

@@ -0,0 +1,268 @@
#!/usr/bin/python3
import os
import time
import re
from random import randint
# Two flags, tell the JS which of the "Selector" or "Value" field should be enabled in the front end
# 0- off, 1- on
browser_step_ui_config = {'Choose one': '0 0',
# 'Check checkbox': '1 0',
# 'Click button containing text': '0 1',
# 'Scroll to bottom': '0 0',
# 'Scroll to element': '1 0',
# 'Scroll to top': '0 0',
# 'Switch to iFrame by index number': '0 1'
# 'Uncheck checkbox': '1 0',
# @todo
'Check checkbox': '1 0',
'Click X,Y': '0 1',
'Click element if exists': '1 0',
'Click element': '1 0',
'Click element containing text': '0 1',
'Enter text in field': '1 1',
'Execute JS': '0 1',
# 'Extract text and use as filter': '1 0',
'Goto site': '0 0',
'Press Enter': '0 0',
'Select by label': '1 1',
'Scroll down': '0 0',
'Uncheck checkbox': '1 0',
'Wait for seconds': '0 1',
'Wait for text': '0 1',
# 'Press Page Down': '0 0',
# 'Press Page Up': '0 0',
# weird bug, come back to it later
}
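# A small sketch of reading these two flags (purely illustrative - the real consumer is the
# front-end JS): "1 1" means both the Selector and the Value fields are enabled.
#   selector_enabled, value_enabled = (c == '1' for c in browser_step_ui_config['Enter text in field'].split())
#   # -> selector_enabled=True, value_enabled=True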
# Good reference - https://playwright.dev/python/docs/input
# https://pythonmana.com/2021/12/202112162236307035.html
#
# ONLY Works in Playwright because we need the fullscreen screenshot
class steppable_browser_interface():
page = None
# Convert and perform "Click Button" for example
def call_action(self, action_name, selector=None, optional_value=None):
now = time.time()
call_action_name = re.sub('[^0-9a-zA-Z]+', '_', action_name.lower())
if call_action_name == 'choose_one':
return
print("> action calling", call_action_name)
# https://playwright.dev/python/docs/selectors#xpath-selectors
if selector and selector.startswith('/') and not selector.startswith('//'):
selector = "xpath=" + selector
action_handler = getattr(self, "action_" + call_action_name)
# Support for Jinja2 variables in the value and selector
from jinja2 import Environment
jinja2_env = Environment(extensions=['jinja2_time.TimeExtension'])
if selector and ('{%' in selector or '{{' in selector):
selector = str(jinja2_env.from_string(selector).render())
if optional_value and ('{%' in optional_value or '{{' in optional_value):
optional_value = str(jinja2_env.from_string(optional_value).render())
action_handler(selector, optional_value)
self.page.wait_for_timeout(3 * 1000)
print("Call action done in", time.time() - now)
def action_goto_url(self, url, optional_value):
# self.page.set_viewport_size({"width": 1280, "height": 5000})
now = time.time()
response = self.page.goto(url, timeout=0, wait_until='commit')
# Wait_until = commit
# - `'commit'` - consider operation to be finished when network response is received and the document started loading.
# Better to not use any smarts from Playwright and just wait an arbitrary number of seconds
# This seemed to solve nearly all 'TimeoutErrors'
print("Time to goto URL ", time.time() - now)
def action_click_element_containing_text(self, selector=None, value=''):
if not len(value.strip()):
return
elem = self.page.get_by_text(value)
if elem.count():
elem.first.click(delay=randint(200, 500), timeout=3000)
def action_enter_text_in_field(self, selector, value):
if not len(selector.strip()):
return
self.page.fill(selector, value, timeout=10 * 1000)
def action_execute_js(self, selector, value):
self.page.evaluate(value)
def action_click_element(self, selector, value):
print("Clicking element")
if not len(selector.strip()):
return
self.page.click(selector, timeout=10 * 1000, delay=randint(200, 500))
def action_click_element_if_exists(self, selector, value):
import playwright._impl._api_types as _api_types
print("Clicking element if exists")
if not len(selector.strip()):
return
try:
self.page.click(selector, timeout=10 * 1000, delay=randint(200, 500))
except _api_types.TimeoutError as e:
return
except _api_types.Error as e:
# Element was there, but page redrew and now its long long gone
return
def action_click_x_y(self, selector, value):
x, y = value.strip().split(',')
x = int(float(x.strip()))
y = int(float(y.strip()))
self.page.mouse.click(x=x, y=y, delay=randint(200, 500))
def action_scroll_down(self, selector, value):
# On some sites this doesn't work, for some reason
self.page.mouse.wheel(0, 600)
self.page.wait_for_timeout(1000)
def action_wait_for_seconds(self, selector, value):
self.page.wait_for_timeout(int(value) * 1000)
# @todo - in the future make some popout interface to capture what needs to be set
# https://playwright.dev/python/docs/api/class-keyboard
def action_press_enter(self, selector, value):
self.page.keyboard.press("Enter", delay=randint(200, 500))
def action_press_page_up(self, selector, value):
self.page.keyboard.press("PageUp", delay=randint(200, 500))
def action_press_page_down(self, selector, value):
self.page.keyboard.press("PageDown", delay=randint(200, 500))
def action_check_checkbox(self, selector, value):
self.page.locator(selector).check(timeout=1000)
def action_uncheck_checkbox(self, selector, value):
self.page.locator(selector).uncheck(timeout=1000)
# Responsible for maintaining a live 'context' with browserless
# @todo - how long do contexts live for anyway?
class browsersteps_live_ui(steppable_browser_interface):
context = None
page = None
render_extra_delay = 1
stale = False
# bump and kill this if idle after X sec
age_start = 0
# use a special driver, maybe locally etc
command_executor = os.getenv(
"PLAYWRIGHT_BROWSERSTEPS_DRIVER_URL"
)
# if not..
if not command_executor:
command_executor = os.getenv(
"PLAYWRIGHT_DRIVER_URL",
'ws://playwright-chrome:3000'
).strip('"')
browser_type = os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').strip('"')
def __init__(self, playwright_browser, proxy=None):
self.age_start = time.time()
self.playwright_browser = playwright_browser
if self.context is None:
self.connect(proxy=proxy)
# Connect and setup a new context
def connect(self, proxy=None):
# Should only get called once - test that
keep_open = 1000 * 60 * 5
now = time.time()
# @todo handle multiple contexts, bind a unique id from the browser on each req?
self.context = self.playwright_browser.new_context(
# @todo
# user_agent=request_headers['User-Agent'] if request_headers.get('User-Agent') else 'Mozilla/5.0',
# proxy=self.proxy,
# This is needed to enable JavaScript execution on GitHub and others
bypass_csp=True,
# Should never be needed
accept_downloads=False,
proxy=proxy
)
self.page = self.context.new_page()
# self.page.set_default_navigation_timeout(keep_open)
self.page.set_default_timeout(keep_open)
# @todo probably this doesn't work
self.page.on(
"close",
self.mark_as_closed,
)
# Listen for all console events and handle errors
self.page.on("console", lambda msg: print(f"Browser steps console - {msg.type}: {msg.text} {msg.args}"))
print("Time to browser setup", time.time() - now)
self.page.wait_for_timeout(1 * 1000)
def mark_as_closed(self):
print("Page closed, cleaning up..")
@property
def has_expired(self):
if not self.page:
return True
def get_current_state(self):
"""Return the screenshot and interactive elements mapping, generally always called after action_()"""
from pkg_resources import resource_string
xpath_element_js = resource_string(__name__, "../../res/xpath_element_scraper.js").decode('utf-8')
now = time.time()
self.page.wait_for_timeout(1 * 1000)
# The actual screenshot
screenshot = self.page.screenshot(type='jpeg', full_page=True, quality=40)
self.page.evaluate("var include_filters=''")
# Go find the interactive elements
# @todo in the future, something smarter that can scan for elements with .click/focus etc event handlers?
elements = 'a,button,input,select,textarea,i,th,td,p,li,h1,h2,h3,h4,div,span'
xpath_element_js = xpath_element_js.replace('%ELEMENTS%', elements)
xpath_data = self.page.evaluate("async () => {" + xpath_element_js + "}")
# Sort by area, largest first, so the JS ends up with smaller elements on top of larger ones
xpath_data['size_pos'] = sorted(xpath_data['size_pos'], key=lambda k: k['width'] * k['height'], reverse=True)
print("Time to complete get_current_state of browser", time.time() - now)
# except
# playwright._impl._api_types.Error: Browser closed.
# @todo show some countdown timer?
return (screenshot, xpath_data)
def request_visualselector_data(self):
"""
Does the same that the playwright operation in content_fetcher does
This is used to just bump the VisualSelector data so it's ready to go if they click on the tab
@todo refactor and remove duplicate code, add include_filters
:param xpath_data:
:param screenshot:
:param current_include_filters:
:return:
"""
self.page.evaluate("var include_filters=''")
from pkg_resources import resource_string
# The code that scrapes elements and makes a list of elements/size/position to click on in the VisualSelector
xpath_element_js = resource_string(__name__, "../../res/xpath_element_scraper.js").decode('utf-8')
from changedetectionio.content_fetcher import visualselector_xpath_selectors
xpath_element_js = xpath_element_js.replace('%ELEMENTS%', visualselector_xpath_selectors)
xpath_data = self.page.evaluate("async () => {" + xpath_element_js + "}")
screenshot = self.page.screenshot(type='jpeg', full_page=True, quality=int(os.getenv("PLAYWRIGHT_SCREENSHOT_QUALITY", 72)))
return (screenshot, xpath_data)

View File

@@ -0,0 +1,18 @@
from playwright.sync_api import PlaywrightContextManager
import asyncio
# So playwright wants to run as a context manager, but we do something horrible and hacky
# we are holding the session open for as long as possible, then shutting it down, and opening a new one
# So it means we don't get to use PlaywrightContextManager' __enter__ __exit__
# To work around this, make goodbye() act the same as the __exit__()
#
# But actually I think this is because the context is opened correctly with __enter__(), but we time out the connection,
# then there's some lock condition where we can't destroy it without it hanging
class c_PlaywrightContextManager(PlaywrightContextManager):
def goodbye(self) -> None:
self.__exit__()
def c_sync_playwright() -> PlaywrightContextManager:
return c_PlaywrightContextManager()

View File

@@ -0,0 +1,33 @@
from distutils.util import strtobool
from flask import Blueprint, flash, redirect, url_for
from flask_login import login_required
from changedetectionio.store import ChangeDetectionStore
from changedetectionio import queuedWatchMetaData
from queue import PriorityQueue
PRICE_DATA_TRACK_ACCEPT = 'accepted'
PRICE_DATA_TRACK_REJECT = 'rejected'
def construct_blueprint(datastore: ChangeDetectionStore, update_q: PriorityQueue):
price_data_follower_blueprint = Blueprint('price_data_follower', __name__)
@login_required
@price_data_follower_blueprint.route("/<string:uuid>/accept", methods=['GET'])
def accept(uuid):
datastore.data['watching'][uuid]['track_ldjson_price_data'] = PRICE_DATA_TRACK_ACCEPT
update_q.put(queuedWatchMetaData.PrioritizedItem(priority=1, item={'uuid': uuid, 'skip_when_checksum_same': False}))
return redirect(url_for("form_watch_checknow", uuid=uuid))
@login_required
@price_data_follower_blueprint.route("/<string:uuid>/reject", methods=['GET'])
def reject(uuid):
datastore.data['watching'][uuid]['track_ldjson_price_data'] = PRICE_DATA_TRACK_REJECT
return redirect(url_for("index"))
return price_data_follower_blueprint

View File

@@ -0,0 +1,153 @@
#!/usr/bin/python3
# Launch as an eventlet.wsgi server instance.
from distutils.util import strtobool
from json.decoder import JSONDecodeError
import eventlet
import eventlet.wsgi
import getopt
import os
import signal
import socket
import sys
from . import store, changedetection_app, content_fetcher
from . import __version__
# Only global so we can access it in the signal handler
app = None
datastore = None
def sigterm_handler(_signo, _stack_frame):
global app
global datastore
# app.config.exit.set()
print('Shutdown: Got SIGTERM, DB saved to disk')
datastore.sync_to_json()
# raise SystemExit
def main():
global datastore
global app
datastore_path = None
do_cleanup = False
host = ''
ipv6_enabled = False
port = os.environ.get('PORT') or 5000
ssl_mode = False
# On Windows, create and use a default path.
if os.name == 'nt':
datastore_path = os.path.expandvars(r'%APPDATA%\changedetection.io')
os.makedirs(datastore_path, exist_ok=True)
else:
# Must be absolute so that send_from_directory doesn't try to make it relative to backend/
datastore_path = os.path.join(os.getcwd(), "../datastore")
try:
opts, args = getopt.getopt(sys.argv[1:], "6Ccsd:h:p:", "port")
except getopt.GetoptError:
print('backend.py -s SSL enable -h [host] -p [port] -d [datastore path]')
sys.exit(2)
create_datastore_dir = False
for opt, arg in opts:
if opt == '-s':
ssl_mode = True
if opt == '-h':
host = arg
if opt == '-p':
port = int(arg)
if opt == '-d':
datastore_path = arg
if opt == '-6':
print ("Enabling IPv6 listen support")
ipv6_enabled = True
# Cleanup (remove text files that aren't in the index)
if opt == '-c':
do_cleanup = True
# Create the datadir if it doesn't exist
if opt == '-C':
create_datastore_dir = True
# isn't there some @thingy to attach to each route to tell it that this route needs a datastore?
app_config = {'datastore_path': datastore_path}
if not os.path.isdir(app_config['datastore_path']):
if create_datastore_dir:
os.mkdir(app_config['datastore_path'])
else:
print(
"ERROR: Directory path for the datastore '{}' does not exist, cannot start, please make sure the directory exists or specify a directory with the -d option.\n"
"Or use the -C parameter to create the directory.".format(app_config['datastore_path']), file=sys.stderr)
sys.exit(2)
try:
datastore = store.ChangeDetectionStore(datastore_path=app_config['datastore_path'], version_tag=__version__)
except JSONDecodeError as e:
# Don't start if the JSON DB looks corrupt
print ("ERROR: JSON DB or Proxy List JSON at '{}' appears to be corrupt, aborting".format(app_config['datastore_path']))
print(str(e))
return
app = changedetection_app(app_config, datastore)
signal.signal(signal.SIGTERM, sigterm_handler)
# Go into cleanup mode
if do_cleanup:
datastore.remove_unused_snapshots()
app.config['datastore_path'] = datastore_path
@app.context_processor
def inject_version():
return dict(right_sticky="v{}".format(datastore.data['version_tag']),
new_version_available=app.config['NEW_VERSION_AVAILABLE'],
has_password=datastore.data['settings']['application']['password'] != False
)
# Monitored websites will not receive a Referer header when a user clicks on an outgoing link.
# @Note: Incompatible with password login (and maybe other features) for now, submit a PR!
@app.after_request
def hide_referrer(response):
if strtobool(os.getenv("HIDE_REFERER", 'false')):
response.headers["Referrer-Policy"] = "no-referrer"
return response
# Proxy sub-directory support
# Set environment var USE_X_SETTINGS=1 on this script
# And then in your proxy_pass settings
#
# proxy_set_header Host "localhost";
# proxy_set_header X-Forwarded-Prefix /app;
if os.getenv('USE_X_SETTINGS'):
print ("USE_X_SETTINGS is ENABLED\n")
from werkzeug.middleware.proxy_fix import ProxyFix
app.wsgi_app = ProxyFix(app.wsgi_app, x_prefix=1, x_host=1)
s_type = socket.AF_INET6 if ipv6_enabled else socket.AF_INET
if ssl_mode:
# @todo finalise SSL config, but this should get you in the right direction if you need it.
eventlet.wsgi.server(eventlet.wrap_ssl(eventlet.listen((host, port), s_type),
certfile='cert.pem',
keyfile='privkey.pem',
server_side=True), app)
else:
eventlet.wsgi.server(eventlet.listen((host, int(port)), s_type), app)
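For reference, an example invocation using the flags parsed above (the entry-point name comes from the usage message and may differ in practice):
backend.py -C -d /path/to/datastore -p 5000
This creates the datastore directory if it is missing and listens on port 5000; add -6 to listen via IPv6 and -s to serve over SSL using cert.pem/privkey.pem.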


@@ -0,0 +1,594 @@
import hashlib
from abc import abstractmethod
import chardet
import json
import logging
import os
import requests
import sys
import time
visualselector_xpath_selectors = 'div,span,form,table,tbody,tr,td,a,p,ul,li,h1,h2,h3,h4, header, footer, section, article, aside, details, main, nav, section, summary'
class Non200ErrorCodeReceived(Exception):
def __init__(self, status_code, url, screenshot=None, xpath_data=None, page_html=None):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
self.screenshot = screenshot
self.xpath_data = xpath_data
self.page_text = None
if page_html:
from changedetectionio import html_tools
self.page_text = html_tools.html_to_text(page_html)
return
class checksumFromPreviousCheckWasTheSame(Exception):
def __init__(self):
return
class JSActionExceptions(Exception):
def __init__(self, status_code, url, screenshot, message=''):
self.status_code = status_code
self.url = url
self.screenshot = screenshot
self.message = message
return
class BrowserStepsStepTimout(Exception):
def __init__(self, step_n):
self.step_n = step_n
return
class PageUnloadable(Exception):
def __init__(self, status_code, url, message, screenshot=False):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
self.screenshot = screenshot
self.message = message
return
class EmptyReply(Exception):
def __init__(self, status_code, url, screenshot=None):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
self.screenshot = screenshot
return
class ScreenshotUnavailable(Exception):
def __init__(self, status_code, url, page_html=None):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
if page_html:
from changedetectionio.html_tools import html_to_text
self.page_text = html_to_text(page_html)
return
class ReplyWithContentButNoText(Exception):
def __init__(self, status_code, url, screenshot=None):
# Set this so we can use it in other parts of the app
self.status_code = status_code
self.url = url
self.screenshot = screenshot
return
class Fetcher():
error = None
status_code = None
content = None
headers = None
browser_steps = None
browser_steps_screenshot_path = None
fetcher_description = "No description"
webdriver_js_execute_code = None
xpath_element_js = ""
xpath_data = None
# Will be needed in the future by the VisualSelector, always get this where possible.
screenshot = False
system_http_proxy = os.getenv('HTTP_PROXY')
system_https_proxy = os.getenv('HTTPS_PROXY')
# Time ON TOP of the system-defined env minimum time
render_extract_delay = 0
def __init__(self):
from pkg_resources import resource_string
# The code that scrapes elements and makes a list of elements/size/position to click on in the VisualSelector
self.xpath_element_js = resource_string(__name__, "res/xpath_element_scraper.js").decode('utf-8')
@abstractmethod
def get_error(self):
return self.error
@abstractmethod
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_include_filters=None,
is_binary=False):
# Should set self.error, self.status_code and self.content
pass
@abstractmethod
def quit(self):
return
@abstractmethod
def get_last_status_code(self):
return self.status_code
@abstractmethod
def screenshot_step(self, step_n):
return None
@abstractmethod
# Return true/false if this checker is ready to run, in case it needs to do some special config check etc
def is_ready(self):
return True
def iterate_browser_steps(self):
from changedetectionio.blueprint.browser_steps.browser_steps import steppable_browser_interface
from playwright._impl._api_types import TimeoutError
from jinja2 import Environment
jinja2_env = Environment(extensions=['jinja2_time.TimeExtension'])
step_n = 0
if self.browser_steps is not None and len(self.browser_steps):
interface = steppable_browser_interface()
interface.page = self.page
valid_steps = filter(lambda s: (s['operation'] and len(s['operation']) and s['operation'] != 'Choose one' and s['operation'] != 'Goto site'), self.browser_steps)
for step in valid_steps:
step_n += 1
print(">> Iterating check - browser Step n {} - {}...".format(step_n, step['operation']))
self.screenshot_step("before-"+str(step_n))
self.save_step_html("before-"+str(step_n))
try:
optional_value = step['optional_value']
selector = step['selector']
# Support for jinja2 template in step values, with date module added
if '{%' in step['optional_value'] or '{{' in step['optional_value']:
optional_value = str(jinja2_env.from_string(step['optional_value']).render())
if '{%' in step['selector'] or '{{' in step['selector']:
selector = str(jinja2_env.from_string(step['selector']).render())
getattr(interface, "call_action")(action_name=step['operation'],
selector=selector,
optional_value=optional_value)
self.screenshot_step(step_n)
self.save_step_html(step_n)
except TimeoutError:
# Stop processing here
raise BrowserStepsStepTimout(step_n=step_n)
# It's always good to reset these
def delete_browser_steps_screenshots(self):
import glob
if self.browser_steps_screenshot_path is not None:
dest = os.path.join(self.browser_steps_screenshot_path, 'step_*.jpeg')
files = glob.glob(dest)
for f in files:
os.unlink(f)
# Maybe in the future each fetcher could provide its own diff output, which could be used for text, images etc
# the current one would return javascript output (as we use JS to generate the diff)
#
def available_fetchers():
# See the if statement at the bottom of this file for how we switch between playwright and webdriver
import inspect
p = []
for name, obj in inspect.getmembers(sys.modules[__name__], inspect.isclass):
if inspect.isclass(obj):
# @todo html_ is maybe better as fetcher_ or something
# In this case, make sure to edit the default one in store.py and fetch_site_status.py
if name.startswith('html_'):
t = tuple([name, obj.fetcher_description])
p.append(t)
return p
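# Illustrative only: available_fetchers() returns (class_name, fetcher_description) tuples,
# roughly like [('html_requests', 'Basic fast Plaintext/HTTP Client'), ('html_webdriver', 'WebDriver Chrome/Javascript')],
# which is what the 'Fetch Method' radio fields in forms.py consume.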
class base_html_playwright(Fetcher):
fetcher_description = "Playwright {}/Javascript".format(
os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').capitalize()
)
if os.getenv("PLAYWRIGHT_DRIVER_URL"):
fetcher_description += " via '{}'".format(os.getenv("PLAYWRIGHT_DRIVER_URL"))
browser_type = ''
command_executor = ''
# Configs for Proxy setup
# In the ENV vars, it is prefixed with "playwright_proxy_", so it is for example "playwright_proxy_server"
playwright_proxy_settings_mappings = ['bypass', 'server', 'username', 'password']
proxy = None
def __init__(self, proxy_override=None):
super().__init__()
# .strip('"') is going to save someone a lot of time when they accidentally wrap the env value
self.browser_type = os.getenv("PLAYWRIGHT_BROWSER_TYPE", 'chromium').strip('"')
self.command_executor = os.getenv(
"PLAYWRIGHT_DRIVER_URL",
'ws://playwright-chrome:3000'
).strip('"')
# If any proxy settings are enabled, then we should setup the proxy object
proxy_args = {}
for k in self.playwright_proxy_settings_mappings:
v = os.getenv('playwright_proxy_' + k, False)
if v:
proxy_args[k] = v.strip('"')
if proxy_args:
self.proxy = proxy_args
# allow per-watch proxy selection override
if proxy_override:
self.proxy = {'server': proxy_override}
if self.proxy:
# Playwright needs separate username and password values
from urllib.parse import urlparse
parsed = urlparse(self.proxy.get('server'))
if parsed.username:
self.proxy['username'] = parsed.username
self.proxy['password'] = parsed.password
def screenshot_step(self, step_n=''):
screenshot = self.page.screenshot(type='jpeg', full_page=True, quality=85)
if self.browser_steps_screenshot_path is not None:
destination = os.path.join(self.browser_steps_screenshot_path, 'step_{}.jpeg'.format(step_n))
logging.debug("Saving step screenshot to {}".format(destination))
with open(destination, 'wb') as f:
f.write(screenshot)
def save_step_html(self, step_n):
content = self.page.content()
destination = os.path.join(self.browser_steps_screenshot_path, 'step_{}.html'.format(step_n))
logging.debug("Saving step HTML to {}".format(destination))
with open(destination, 'w') as f:
f.write(content)
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_include_filters=None,
is_binary=False):
from playwright.sync_api import sync_playwright
import playwright._impl._api_types
self.delete_browser_steps_screenshots()
response = None
with sync_playwright() as p:
browser_type = getattr(p, self.browser_type)
# Seemed to cause a connection Exception even though I can see it connect
# self.browser = browser_type.connect(self.command_executor, timeout=timeout*1000)
# 60,000 connection timeout only
browser = browser_type.connect_over_cdp(self.command_executor, timeout=60000)
# Set user agent to prevent Cloudflare from blocking the browser
# Use the default one configured in the App.py model that's passed from fetch_site_status.py
context = browser.new_context(
user_agent=request_headers['User-Agent'] if request_headers.get('User-Agent') else 'Mozilla/5.0',
proxy=self.proxy,
# This is needed to enable JavaScript execution on GitHub and others
bypass_csp=True,
# Should be `allow` or `block` - sites like YouTube can transmit large amounts of data via Service Workers
service_workers=os.getenv('PLAYWRIGHT_SERVICE_WORKERS', 'allow'),
# Should never be needed
accept_downloads=False
)
self.page = context.new_page()
if len(request_headers):
context.set_extra_http_headers(request_headers)
self.page.set_default_navigation_timeout(90000)
self.page.set_default_timeout(90000)
# Listen for all console events and handle errors
self.page.on("console", lambda msg: print(f"Playwright console: Watch URL: {url} {msg.type}: {msg.text} {msg.args}"))
# Goto page
try:
# Wait_until = commit
# - `'commit'` - consider operation to be finished when network response is received and the document started loading.
# Better to not use any smarts from Playwright and just wait an arbitrary number of seconds
# This seemed to solve nearly all 'TimeoutErrors'
response = self.page.goto(url, wait_until='commit')
except playwright._impl._api_types.Error as e:
# Retry once - https://github.com/browserless/chrome/issues/2485
# Sometimes errors related to invalid certs and others can be random
print ("Content Fetcher > retrying request got error - ", str(e))
time.sleep(1)
response = self.page.goto(url, wait_until='commit')
except Exception as e:
print ("Content Fetcher > Other exception when page.goto", str(e))
context.close()
browser.close()
raise PageUnloadable(url=url, status_code=None, message=str(e))
# Execute any browser steps
try:
extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
self.page.wait_for_timeout(extra_wait * 1000)
if self.webdriver_js_execute_code is not None and len(self.webdriver_js_execute_code):
self.page.evaluate(self.webdriver_js_execute_code)
except playwright._impl._api_types.TimeoutError as e:
context.close()
browser.close()
# This can be ok, we will try to grab what we could retrieve
pass
except Exception as e:
print ("Content Fetcher > Other exception when executing custom JS code", str(e))
context.close()
browser.close()
raise PageUnloadable(url=url, status_code=None, message=str(e))
if response is None:
context.close()
browser.close()
print ("Content Fetcher > Response object was none")
raise EmptyReply(url=url, status_code=None)
# Run Browser Steps here
self.iterate_browser_steps()
extra_wait = int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay
time.sleep(extra_wait)
self.content = self.page.content()
self.status_code = response.status
if len(self.page.content().strip()) == 0:
context.close()
browser.close()
print ("Content Fetcher > Content was empty")
raise EmptyReply(url=url, status_code=response.status)
self.status_code = response.status
self.content = self.page.content()
self.headers = response.all_headers()
# So we can find an element on the page where its selector was entered manually (maybe not xPath etc)
if current_include_filters is not None:
self.page.evaluate("var include_filters={}".format(json.dumps(current_include_filters)))
else:
self.page.evaluate("var include_filters=''")
self.xpath_data = self.page.evaluate("async () => {" + self.xpath_element_js.replace('%ELEMENTS%', visualselector_xpath_selectors) + "}")
# Bug 3 in Playwright screenshot handling
# Some bug where it gives the wrong screenshot size, but making a request with the clip set first seems to solve it
# JPEG is better here because the screenshots can be very very large
# Screenshots also travel via the ws:// (websocket) meaning that the binary data is base64 encoded
# which will significantly increase the IO size between the server and client, it's recommended to use the lowest
# acceptable screenshot quality here
try:
# The actual screenshot
self.screenshot = self.page.screenshot(type='jpeg', full_page=True, quality=int(os.getenv("PLAYWRIGHT_SCREENSHOT_QUALITY", 72)))
except Exception as e:
context.close()
browser.close()
raise ScreenshotUnavailable(url=url, status_code=None)
context.close()
browser.close()
class base_html_webdriver(Fetcher):
if os.getenv("WEBDRIVER_URL"):
fetcher_description = "WebDriver Chrome/Javascript via '{}'".format(os.getenv("WEBDRIVER_URL"))
else:
fetcher_description = "WebDriver Chrome/Javascript"
command_executor = ''
# Configs for Proxy setup
# In the ENV vars, is prefixed with "webdriver_", so it is for example "webdriver_sslProxy"
selenium_proxy_settings_mappings = ['proxyType', 'ftpProxy', 'httpProxy', 'noProxy',
'proxyAutoconfigUrl', 'sslProxy', 'autodetect',
'socksProxy', 'socksVersion', 'socksUsername', 'socksPassword']
proxy = None
def __init__(self, proxy_override=None):
super().__init__()
from selenium.webdriver.common.proxy import Proxy as SeleniumProxy
# .strip('"') is going to save someone a lot of time when they accidentally wrap the env value
self.command_executor = os.getenv("WEBDRIVER_URL", 'http://browser-chrome:4444/wd/hub').strip('"')
# If any proxy settings are enabled, then we should setup the proxy object
proxy_args = {}
for k in self.selenium_proxy_settings_mappings:
v = os.getenv('webdriver_' + k, False)
if v:
proxy_args[k] = v.strip('"')
# Map back standard HTTP_ and HTTPS_PROXY to webDriver httpProxy/sslProxy
if not proxy_args.get('httpProxy') and self.system_http_proxy:
proxy_args['httpProxy'] = self.system_http_proxy
if not proxy_args.get('sslProxy') and self.system_https_proxy:
proxy_args['sslProxy'] = self.system_https_proxy
# Allows override the proxy on a per-request basis
if proxy_override is not None:
proxy_args['httpProxy'] = proxy_override
if proxy_args:
self.proxy = SeleniumProxy(raw=proxy_args)
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_include_filters=None,
is_binary=False):
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
from selenium.common.exceptions import WebDriverException
# request_body, request_method unused for now, until some magic in the future happens.
# check env for WEBDRIVER_URL
self.driver = webdriver.Remote(
command_executor=self.command_executor,
desired_capabilities=DesiredCapabilities.CHROME,
proxy=self.proxy)
try:
self.driver.get(url)
except WebDriverException as e:
# Be sure we close the session window
self.quit()
raise
self.driver.set_window_size(1280, 1024)
self.driver.implicitly_wait(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)))
if self.webdriver_js_execute_code is not None:
self.driver.execute_script(self.webdriver_js_execute_code)
# Selenium doesn't automatically wait for actions as well as Playwright does, so wait again
self.driver.implicitly_wait(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)))
# @todo - how to check this? is it possible?
self.status_code = 200
# @todo somehow we should try to get this working for WebDriver
# raise EmptyReply(url=url, status_code=r.status_code)
# @todo - dom wait loaded?
time.sleep(int(os.getenv("WEBDRIVER_DELAY_BEFORE_CONTENT_READY", 5)) + self.render_extract_delay)
self.content = self.driver.page_source
self.headers = {}
self.screenshot = self.driver.get_screenshot_as_png()
# Does the connection to the webdriver work? run a test connection.
def is_ready(self):
from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities
self.driver = webdriver.Remote(
command_executor=self.command_executor,
desired_capabilities=DesiredCapabilities.CHROME)
# driver.quit() seems to cause better exceptions
self.quit()
return True
def quit(self):
if self.driver:
try:
self.driver.quit()
except Exception as e:
print("Content Fetcher > Exception in chrome shutdown/quit" + str(e))
# "html_requests" is listed as the default fetcher in store.py!
class html_requests(Fetcher):
fetcher_description = "Basic fast Plaintext/HTTP Client"
def __init__(self, proxy_override=None):
self.proxy_override = proxy_override
def run(self,
url,
timeout,
request_headers,
request_body,
request_method,
ignore_status_codes=False,
current_include_filters=None,
is_binary=False):
# Make requests use a more modern looking user-agent
if not 'User-Agent' in request_headers:
request_headers['User-Agent'] = os.getenv("DEFAULT_SETTINGS_HEADERS_USERAGENT",
'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.66 Safari/537.36')
proxies = {}
# Allows override the proxy on a per-request basis
if self.proxy_override:
proxies = {'http': self.proxy_override, 'https': self.proxy_override, 'ftp': self.proxy_override}
else:
if self.system_http_proxy:
proxies['http'] = self.system_http_proxy
if self.system_https_proxy:
proxies['https'] = self.system_https_proxy
r = requests.request(method=request_method,
data=request_body,
url=url,
headers=request_headers,
timeout=timeout,
proxies=proxies,
verify=False)
# If the response did not tell us what encoding format to expect, then use chardet to override what `requests` thinks.
# For example - some sites don't tell us it's utf-8, but return utf-8 content
# This seems to not occur when using webdriver/selenium, it seems to detect the text encoding more reliably.
# https://github.com/psf/requests/issues/1604 good info about requests encoding detection
if not is_binary:
# Don't run this for PDFs (and requests identified as binary), it takes a _long_ time
if not r.headers.get('content-type') or not 'charset=' in r.headers.get('content-type'):
encoding = chardet.detect(r.content)['encoding']
if encoding:
r.encoding = encoding
if not r.content or not len(r.content):
raise EmptyReply(url=url, status_code=r.status_code)
# @todo test this
# @todo maybe you really want to test zero-byte return pages?
if r.status_code != 200 and not ignore_status_codes:
# maybe check with content works?
raise Non200ErrorCodeReceived(url=url, status_code=r.status_code, page_html=r.text)
self.status_code = r.status_code
if is_binary:
# Binary files just return their checksum until we add something smarter
self.content = hashlib.md5(r.content).hexdigest()
else:
self.content = r.text
self.headers = r.headers
self.raw_content = r.content
# Decide which is the 'real' HTML webdriver, this is more a system-wide config
# rather than site-specific.
use_playwright_as_chrome_fetcher = os.getenv('PLAYWRIGHT_DRIVER_URL', False)
if use_playwright_as_chrome_fetcher:
html_webdriver = base_html_playwright
else:
html_webdriver = base_html_webdriver
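A minimal usage sketch for the plain-HTTP fetcher above (illustrative only; the URL is a placeholder):
from changedetectionio import content_fetcher
fetcher = content_fetcher.html_requests()
fetcher.run(url='https://example.com', timeout=15, request_headers={}, request_body=None, request_method='GET')
print(fetcher.get_last_status_code(), len(fetcher.content))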

changedetectionio/diff.py (new file, 52 lines)

@@ -0,0 +1,52 @@
# used for the notifications, the front-end is using a JS library
import difflib
def same_slicer(l, a, b):
if a == b:
return [l[a]]
else:
return l[a:b]
# like .compare but a little different output
def customSequenceMatcher(before, after, include_equal=False):
cruncher = difflib.SequenceMatcher(isjunk=lambda x: x in " \\t", a=before, b=after)
# @todo Line-by-line mode instead of bunched, including `after` that is not in `before` (maybe unset?)
for tag, alo, ahi, blo, bhi in cruncher.get_opcodes():
if include_equal and tag == 'equal':
g = before[alo:ahi]
yield g
elif tag == 'delete':
g = ["(removed) " + i for i in same_slicer(before, alo, ahi)]
yield g
elif tag == 'replace':
g = ["(changed) " + i for i in same_slicer(before, alo, ahi)]
g += ["(into ) " + i for i in same_slicer(after, blo, bhi)]
yield g
elif tag == 'insert':
g = ["(added ) " + i for i in same_slicer(after, blo, bhi)]
yield g
# include_equal=False (the default) - only return info about the differences, no 'equal' context lines
# line_feed_sep could be "<br/>" or "<li>" or "\n" etc
def render_diff(previous_file, newest_file, include_equal=False, line_feed_sep="\n"):
with open(newest_file, 'r') as f:
newest_version_file_contents = f.read()
newest_version_file_contents = [line.rstrip() for line in newest_version_file_contents.splitlines()]
if previous_file:
with open(previous_file, 'r') as f:
previous_version_file_contents = f.read()
previous_version_file_contents = [line.rstrip() for line in previous_version_file_contents.splitlines()]
else:
previous_version_file_contents = ""
rendered_diff = customSequenceMatcher(previous_version_file_contents,
newest_version_file_contents,
include_equal)
# Recursively join lists
f = lambda L: line_feed_sep.join([f(x) if type(x) is list else x for x in L])
return f(rendered_diff)
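A usage sketch (illustrative; the snapshot file paths are placeholders):
from changedetectionio import diff
# Returns one entry per change, prefixed with (removed) / (changed) / (into ) / (added )
text = diff.render_diff('/datastore/uuid/older-snapshot.txt', '/datastore/uuid/newer-snapshot.txt', include_equal=False, line_feed_sep="<br/>")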


@@ -0,0 +1,381 @@
import hashlib
import json
import logging
import os
import re
import urllib3
from changedetectionio import content_fetcher, html_tools
from changedetectionio.blueprint.price_data_follower import PRICE_DATA_TRACK_ACCEPT, PRICE_DATA_TRACK_REJECT
from copy import deepcopy
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
class FilterNotFoundInResponse(ValueError):
def __init__(self, msg):
ValueError.__init__(self, msg)
class PDFToHTMLToolNotFound(ValueError):
def __init__(self, msg):
ValueError.__init__(self, msg)
# Some common stuff here that can be moved to a base class
# (set_proxy_from_list)
class perform_site_check():
screenshot = None
xpath_data = None
def __init__(self, *args, datastore, **kwargs):
super().__init__(*args, **kwargs)
self.datastore = datastore
# Doesn't look like python supports forward slash auto enclosure in re.findall
# So convert it to inline flag "foobar(?i)" type configuration
def forward_slash_enclosed_regex_to_options(self, regex):
res = re.search(r'^/(.*?)/(\w+)$', regex, re.IGNORECASE)
if res:
regex = res.group(1)
regex += '(?{})'.format(res.group(2))
else:
regex += '(?{})'.format('i')
return regex
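# Illustrative examples of the conversion above:
#   '/someprice.*/i'  ->  'someprice.*(?i)'
#   'someprice.*'     ->  'someprice.*(?i)'  (plain patterns default to case-insensitive)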
def run(self, uuid, skip_when_checksum_same=True):
changed_detected = False
screenshot = False # as bytes
stripped_text_from_html = ""
# DeepCopy so we can be sure we don't accidentally change anything by reference
watch = deepcopy(self.datastore.data['watching'].get(uuid))
if not watch:
return
# Protect against file:// access
if re.search(r'^file', watch.get('url', ''), re.IGNORECASE) and not os.getenv('ALLOW_FILE_URI', False):
raise Exception(
"file:// type access is denied for security reasons."
)
# Unset any existing notification error
update_obj = {'last_notification_error': False, 'last_error': False}
extra_headers = watch.get('headers', [])
# Tweak the base config with the per-watch ones
request_headers = deepcopy(self.datastore.data['settings']['headers'])
request_headers.update(extra_headers)
# https://github.com/psf/requests/issues/4525
# Requests doesn't yet support brotli encoding, so don't put 'br' here, be totally sure that the user cannot
# do this by accident.
if 'Accept-Encoding' in request_headers and "br" in request_headers['Accept-Encoding']:
request_headers['Accept-Encoding'] = request_headers['Accept-Encoding'].replace(', br', '')
timeout = self.datastore.data['settings']['requests'].get('timeout')
url = watch.link
request_body = self.datastore.data['watching'][uuid].get('body')
request_method = self.datastore.data['watching'][uuid].get('method')
ignore_status_codes = self.datastore.data['watching'][uuid].get('ignore_status_codes', False)
# source: support
is_source = False
if url.startswith('source:'):
url = url.replace('source:', '')
is_source = True
# Pluggable content fetcher
prefer_backend = watch.get_fetch_backend
if not prefer_backend or prefer_backend == 'system':
prefer_backend = self.datastore.data['settings']['application']['fetch_backend']
if hasattr(content_fetcher, prefer_backend):
klass = getattr(content_fetcher, prefer_backend)
else:
# If the klass doesn't exist, just use a default
klass = getattr(content_fetcher, "html_requests")
proxy_id = self.datastore.get_preferred_proxy_for_watch(uuid=uuid)
proxy_url = None
if proxy_id:
proxy_url = self.datastore.proxy_list.get(proxy_id).get('url')
print("UUID {} Using proxy {}".format(uuid, proxy_url))
fetcher = klass(proxy_override=proxy_url)
# Configurable per-watch or global extra delay before extracting text (for webDriver types)
system_webdriver_delay = self.datastore.data['settings']['application'].get('webdriver_delay', None)
if watch['webdriver_delay'] is not None:
fetcher.render_extract_delay = watch.get('webdriver_delay')
elif system_webdriver_delay is not None:
fetcher.render_extract_delay = system_webdriver_delay
# Possible conflict
if prefer_backend == 'html_webdriver':
fetcher.browser_steps = watch.get('browser_steps', None)
fetcher.browser_steps_screenshot_path = os.path.join(self.datastore.datastore_path, uuid)
if watch.get('webdriver_js_execute_code') is not None and watch.get('webdriver_js_execute_code').strip():
fetcher.webdriver_js_execute_code = watch.get('webdriver_js_execute_code')
# Requests for PDFs, images etc should be passed the is_binary flag
is_binary = watch.is_pdf
fetcher.run(url, timeout, request_headers, request_body, request_method, ignore_status_codes, watch.get('include_filters'), is_binary=is_binary)
fetcher.quit()
self.screenshot = fetcher.screenshot
self.xpath_data = fetcher.xpath_data
# Track the content type
update_obj['content_type'] = fetcher.headers.get('Content-Type', '')
# Watches added automatically in the queue manager will skip if it's the same checksum as the previous run
# Saves a lot of CPU
update_obj['previous_md5_before_filters'] = hashlib.md5(fetcher.content.encode('utf-8')).hexdigest()
if skip_when_checksum_same:
if update_obj['previous_md5_before_filters'] == watch.get('previous_md5_before_filters'):
raise content_fetcher.checksumFromPreviousCheckWasTheSame()
# Fetching complete, now filters
# @todo move to class / maybe inside of fetcher abstract base?
# @note: I feel like the following should be in a more obvious chain system
# - Check filter text
# - Is the checksum different?
# - Do we convert to JSON?
# https://stackoverflow.com/questions/41817578/basic-method-chaining ?
# return content().textfilter().jsonextract().checksumcompare() ?
is_json = 'application/json' in fetcher.headers.get('Content-Type', '')
is_html = not is_json
# source: support, basically treat it as plaintext
if is_source:
is_html = False
is_json = False
if watch.is_pdf or 'application/pdf' in fetcher.headers.get('Content-Type', '').lower():
from shutil import which
tool = os.getenv("PDF_TO_HTML_TOOL", "pdftohtml")
if not which(tool):
raise PDFToHTMLToolNotFound("Command-line `{}` tool was not found in system PATH, was it installed?".format(tool))
import subprocess
proc = subprocess.Popen(
[tool, '-stdout', '-', '-s', 'out.pdf', '-i'],
stdout=subprocess.PIPE,
stdin=subprocess.PIPE)
proc.stdin.write(fetcher.raw_content)
proc.stdin.close()
fetcher.content = proc.stdout.read().decode('utf-8')
proc.wait(timeout=60)
# Add a little metadata so we know if the file changes (like if an image changes, but the text is the same)
# @todo may cause problems with non-UTF8?
metadata = "<p>Added by changedetection.io: Document checksum - {} Filesize - {} bytes</p>".format(
hashlib.md5(fetcher.raw_content).hexdigest().upper(),
len(fetcher.content))
fetcher.content = fetcher.content.replace('</body>', metadata + '</body>')
include_filters_rule = deepcopy(watch.get('include_filters', []))
# include_filters_rule = watch['include_filters']
subtractive_selectors = watch.get(
"subtractive_selectors", []
) + self.datastore.data["settings"]["application"].get(
"global_subtractive_selectors", []
)
# Inject a virtual LD+JSON price tracker rule
if watch.get('track_ldjson_price_data', '') == PRICE_DATA_TRACK_ACCEPT:
include_filters_rule.append(html_tools.LD_JSON_PRODUCT_OFFER_SELECTOR)
has_filter_rule = include_filters_rule and len("".join(include_filters_rule).strip())
has_subtractive_selectors = subtractive_selectors and len(subtractive_selectors[0].strip())
if is_json and not has_filter_rule:
include_filters_rule.append("json:$")
has_filter_rule = True
if is_json:
# Sort the JSON so we don't get false alerts when the content is just re-ordered
try:
fetcher.content = json.dumps(json.loads(fetcher.content), sort_keys=True)
except Exception as e:
# Might have just been a snippet, or otherwise bad JSON, continue
pass
if has_filter_rule:
json_filter_prefixes = ['json:', 'jq:']
for filter in include_filters_rule:
if any(prefix in filter for prefix in json_filter_prefixes):
stripped_text_from_html += html_tools.extract_json_as_string(content=fetcher.content, json_filter=filter)
is_html = False
if is_html or is_source:
# CSS Filter, extract the HTML that matches and feed that into the existing inscriptis::get_text
fetcher.content = html_tools.workarounds_for_obfuscations(fetcher.content)
html_content = fetcher.content
# Not JSON; if it's text/plain just pass it through as-is, otherwise treat it as HTML
if 'text/plain' in fetcher.headers.get('Content-Type', '').lower():
# Don't run get_text or xpath/css filters on plaintext
stripped_text_from_html = html_content
else:
# Does it have some ld+json price data? used for easier monitoring
update_obj['has_ldjson_price_data'] = html_tools.has_ldjson_product_info(fetcher.content)
# Then we assume HTML
if has_filter_rule:
html_content = ""
for filter_rule in include_filters_rule:
# For HTML/XML we offer xpath as an option, just start a regular xPath "/.."
if filter_rule[0] == '/' or filter_rule.startswith('xpath:'):
html_content += html_tools.xpath_filter(xpath_filter=filter_rule.replace('xpath:', ''),
html_content=fetcher.content,
append_pretty_line_formatting=not is_source)
else:
# CSS Filter, extract the HTML that matches and feed that into the existing inscriptis::get_text
html_content += html_tools.include_filters(include_filters=filter_rule,
html_content=fetcher.content,
append_pretty_line_formatting=not is_source)
if not html_content.strip():
raise FilterNotFoundInResponse(include_filters_rule)
if has_subtractive_selectors:
html_content = html_tools.element_removal(subtractive_selectors, html_content)
if is_source:
stripped_text_from_html = html_content
else:
# extract text
do_anchor = self.datastore.data["settings"]["application"].get("render_anchor_tag_content", False)
stripped_text_from_html = \
html_tools.html_to_text(
html_content,
render_anchor_tag_content=do_anchor
)
# Re #340 - return the content before the 'ignore text' was applied
text_content_before_ignored_filter = stripped_text_from_html.encode('utf-8')
# Treat pages with no renderable text content as a change? No by default
empty_pages_are_a_change = self.datastore.data['settings']['application'].get('empty_pages_are_a_change', False)
if not is_json and not empty_pages_are_a_change and len(stripped_text_from_html.strip()) == 0:
raise content_fetcher.ReplyWithContentButNoText(url=url, status_code=fetcher.get_last_status_code(), screenshot=screenshot)
# We rely on the actual text in the html output.. many sites have random script vars etc,
# in the future we'll implement other mechanisms.
update_obj["last_check_status"] = fetcher.get_last_status_code()
# If there's text to skip
# @todo we could abstract out the get_text() to handle this cleaner
text_to_ignore = watch.get('ignore_text', []) + self.datastore.data['settings']['application'].get('global_ignore_text', [])
if len(text_to_ignore):
stripped_text_from_html = html_tools.strip_ignore_text(stripped_text_from_html, text_to_ignore)
else:
stripped_text_from_html = stripped_text_from_html.encode('utf8')
# Re #615 - Extract text by regex
extract_text = watch.get('extract_text', [])
if len(extract_text) > 0:
regex_matched_output = []
for s_re in extract_text:
# in case they specified something in '/.../x'
regex = self.forward_slash_enclosed_regex_to_options(s_re)
result = re.findall(regex.encode('utf-8'), stripped_text_from_html)
for l in result:
if type(l) is tuple:
# @todo - some formatter option default (between groups)
regex_matched_output += list(l) + [b'\n']
else:
# @todo - some formatter option default (between each ungrouped result)
regex_matched_output += [l] + [b'\n']
# Now we will only show what the regex matched
stripped_text_from_html = b''
text_content_before_ignored_filter = b''
if regex_matched_output:
# @todo some formatter for presentation?
stripped_text_from_html = b''.join(regex_matched_output)
text_content_before_ignored_filter = stripped_text_from_html
# Re #133 - if we should strip whitespaces from triggering the change detected comparison
if self.datastore.data['settings']['application'].get('ignore_whitespace', False):
fetched_md5 = hashlib.md5(stripped_text_from_html.translate(None, b'\r\n\t ')).hexdigest()
else:
fetched_md5 = hashlib.md5(stripped_text_from_html).hexdigest()
############ Blocking rules, after checksum #################
blocked = False
trigger_text = watch.get('trigger_text', [])
if len(trigger_text):
# Assume blocked
blocked = True
# Filter and trigger works the same, so reuse it
# It should return the line numbers that match
result = html_tools.strip_ignore_text(content=str(stripped_text_from_html),
wordlist=trigger_text,
mode="line numbers")
# Unblock if the trigger was found
if result:
blocked = False
text_should_not_be_present = watch.get('text_should_not_be_present', [])
if len(text_should_not_be_present):
# If anything matched, then we should block a change from happening
result = html_tools.strip_ignore_text(content=str(stripped_text_from_html),
wordlist=text_should_not_be_present,
mode="line numbers")
if result:
blocked = True
# The main thing that all this at the moment comes down to :)
if watch.get('previous_md5') != fetched_md5:
changed_detected = True
# Looks like something changed, but did it match all the rules?
if blocked:
changed_detected = False
# Extract title as title
if is_html:
if self.datastore.data['settings']['application'].get('extract_title_as_title') or watch['extract_title_as_title']:
if not watch['title'] or not len(watch['title']):
update_obj['title'] = html_tools.extract_element(find='title', html_content=fetcher.content)
if changed_detected:
if watch.get('check_unique_lines', False):
has_unique_lines = watch.lines_contain_something_unique_compared_to_history(lines=stripped_text_from_html.splitlines())
# One or more lines? unsure?
if not has_unique_lines:
logging.debug("check_unique_lines: UUID {} didnt have anything new setting change_detected=False".format(uuid))
changed_detected = False
else:
logging.debug("check_unique_lines: UUID {} had unique content".format(uuid))
# Always record the new checksum
update_obj["previous_md5"] = fetched_md5
# On the first run of a site, watch['previous_md5'] will be None, set it to the current one.
if not watch.get('previous_md5'):
watch['previous_md5'] = fetched_md5
return changed_detected, update_obj, text_content_before_ignored_filter
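A hedged sketch of how this class is driven by the update worker (the import path and variable names are assumptions based on the comments above):
from changedetectionio import fetch_site_status
update_handler = fetch_site_status.perform_site_check(datastore=datastore)
changed_detected, update_obj, contents = update_handler.run(uuid, skip_when_checksum_same=False)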

changedetectionio/forms.py (new file, 491 lines)

@@ -0,0 +1,491 @@
import os
import re
from wtforms import (
BooleanField,
Form,
IntegerField,
RadioField,
SelectField,
StringField,
SubmitField,
TextAreaField,
fields,
validators,
widgets
)
from wtforms.fields import FieldList
from wtforms.validators import ValidationError
# default
# each select <option data-enabled="enabled-0-0"
from changedetectionio.blueprint.browser_steps.browser_steps import browser_step_ui_config
from changedetectionio import content_fetcher
from changedetectionio.notification import (
valid_notification_formats,
)
from wtforms.fields import FormField
valid_method = {
'GET',
'POST',
'PUT',
'PATCH',
'DELETE',
}
default_method = 'GET'
class StringListField(StringField):
widget = widgets.TextArea()
def _value(self):
if self.data:
# ignore empty lines in the storage
data = list(filter(lambda x: len(x.strip()), self.data))
# Apply strip to each line
data = list(map(lambda x: x.strip(), data))
return "\r\n".join(data)
else:
return u''
# incoming
def process_formdata(self, valuelist):
if valuelist and len(valuelist[0].strip()):
# Remove empty strings, stripping and splitting \r\n, only \n etc.
self.data = valuelist[0].splitlines()
# Remove empty lines from the final data
self.data = list(filter(lambda x: len(x.strip()), self.data))
else:
self.data = []
class SaltyPasswordField(StringField):
widget = widgets.PasswordInput()
encrypted_password = ""
def build_password(self, password):
import base64
import hashlib
import secrets
# Make a new salt on every new password and store it with the password
salt = secrets.token_bytes(32)
key = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, 100000)
store = base64.b64encode(salt + key).decode('ascii')
return store
# incoming
def process_formdata(self, valuelist):
if valuelist:
# Be really sure it's non-zero in length
if len(valuelist[0].strip()) > 0:
self.encrypted_password = self.build_password(valuelist[0])
self.data = ""
else:
self.data = False
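# A minimal verification sketch (not part of this class): given the stored base64(salt + key)
# value produced by build_password() above, a login check would re-derive and compare, e.g.
#   raw = base64.b64decode(stored)
#   salt, derived = raw[:32], raw[32:]
#   candidate = hashlib.pbkdf2_hmac('sha256', password.encode('utf-8'), salt, 100000)
#   ok = secrets.compare_digest(candidate, derived)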
class TimeBetweenCheckForm(Form):
weeks = IntegerField('Weeks', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
days = IntegerField('Days', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
hours = IntegerField('Hours', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
minutes = IntegerField('Minutes', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
seconds = IntegerField('Seconds', validators=[validators.Optional(), validators.NumberRange(min=0, message="Should contain zero or more seconds")])
# @todo add total seconds minimum validator = minimum_seconds_recheck_time
# Separated by key:value
class StringDictKeyValue(StringField):
widget = widgets.TextArea()
def _value(self):
if self.data:
output = u''
for k in self.data.keys():
output += "{}: {}\r\n".format(k, self.data[k])
return output
else:
return u''
# incoming
def process_formdata(self, valuelist):
if valuelist:
self.data = {}
# Remove empty strings
cleaned = list(filter(None, valuelist[0].split("\n")))
for s in cleaned:
parts = s.strip().split(':', 1)
if len(parts) == 2:
self.data.update({parts[0].strip(): parts[1].strip()})
else:
self.data = {}
class ValidateContentFetcherIsReady(object):
"""
Validates that the selected content fetcher is available and responds (e.g. the WebDriver/Playwright endpoint is reachable)
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
import urllib3.exceptions
from changedetectionio import content_fetcher
# Better would be a radiohandler that keeps a reference to each class
if field.data is not None and field.data != 'system':
klass = getattr(content_fetcher, field.data)
some_object = klass()
try:
ready = some_object.is_ready()
except urllib3.exceptions.MaxRetryError as e:
driver_url = some_object.command_executor
message = field.gettext('Content fetcher \'%s\' did not respond.' % (field.data))
message += '<br/>' + field.gettext(
'Be sure that the selenium/webdriver runner is running and accessible via network from this container/host.')
message += '<br/>' + field.gettext('Did you follow the instructions in the wiki?')
message += '<br/><br/>' + field.gettext('WebDriver Host: %s' % (driver_url))
message += '<br/><a href="https://github.com/dgtlmoon/changedetection.io/wiki/Fetching-pages-with-WebDriver">Go here for more information</a>'
message += '<br/>'+field.gettext('Content fetcher did not respond properly, unable to use it.\n %s' % (str(e)))
raise ValidationError(message)
except Exception as e:
message = field.gettext('Content fetcher \'%s\' did not respond properly, unable to use it.\n %s')
raise ValidationError(message % (field.data, e))
class ValidateNotificationBodyAndTitleWhenURLisSet(object):
"""
Validates that they entered something in both notification title+body when the URL is set
Due to https://github.com/dgtlmoon/changedetection.io/issues/360
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
if len(field.data):
if not len(form.notification_title.data) or not len(form.notification_body.data):
message = field.gettext('Notification Body and Title is required when a Notification URL is used')
raise ValidationError(message)
class ValidateAppRiseServers(object):
"""
Validates that each URL given is compatible with AppRise
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
import apprise
apobj = apprise.Apprise()
for server_url in field.data:
if not apobj.add(server_url):
message = field.gettext('\'%s\' is not a valid AppRise URL.' % (server_url))
raise ValidationError(message)
class ValidateJinja2Template(object):
"""
Validates that a {token} is from a valid set
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
from changedetectionio import notification
from jinja2 import Environment, BaseLoader, TemplateSyntaxError
from jinja2.meta import find_undeclared_variables
try:
jinja2_env = Environment(loader=BaseLoader)
jinja2_env.globals.update(notification.valid_tokens)
rendered = jinja2_env.from_string(field.data).render()
except TemplateSyntaxError as e:
raise ValidationError(f"This is not a valid Jinja2 template: {e}") from e
ast = jinja2_env.parse(field.data)
undefined = ", ".join(find_undeclared_variables(ast))
if undefined:
raise ValidationError(
f"The following tokens used in the notification are not valid: {undefined}"
)
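# Illustrative only: with the defaults below, a body such as '{{ watch_url }} had a change.'
# renders cleanly, while something like '{{ not_a_real_token }}' is reported as an invalid
# token via find_undeclared_variables() above.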
class validateURL(object):
"""
Flask wtform validators won't work with basic auth
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
import validators
try:
validators.url(field.data.strip())
except validators.ValidationFailure:
message = field.gettext('\'%s\' is not a valid URL.' % (field.data.strip()))
raise ValidationError(message)
from .model.Watch import is_safe_url
if not is_safe_url(field.data):
raise ValidationError('Watch protocol is not permitted by SAFE_PROTOCOL_REGEX')
class ValidateListRegex(object):
"""
Validates that anything that looks like a regex passes as a regex
"""
def __init__(self, message=None):
self.message = message
def __call__(self, form, field):
for line in field.data:
if line[0] == '/' and line[-1] == '/':
# Because internally we don't wrap in /
line = line.strip('/')
try:
re.compile(line)
except re.error:
message = field.gettext('RegEx \'%s\' is not a valid regular expression.')
raise ValidationError(message % (line))
class ValidateCSSJSONXPATHInput(object):
"""
Filter validation
@todo CSS validator ;)
"""
def __init__(self, message=None, allow_xpath=True, allow_json=True):
self.message = message
self.allow_xpath = allow_xpath
self.allow_json = allow_json
def __call__(self, form, field):
if isinstance(field.data, str):
data = [field.data]
else:
data = field.data
for line in data:
# Nothing to see here
if not len(line.strip()):
return
# Does it look like XPath?
if line.strip()[0] == '/':
if not self.allow_xpath:
raise ValidationError("XPath not permitted in this field!")
from lxml import etree, html
tree = html.fromstring("<html></html>")
try:
tree.xpath(line.strip())
except etree.XPathEvalError as e:
message = field.gettext('\'%s\' is not a valid XPath expression. (%s)')
raise ValidationError(message % (line, str(e)))
except:
raise ValidationError("A system-error occurred when validating your XPath expression")
if 'json:' in line:
if not self.allow_json:
raise ValidationError("JSONPath not permitted in this field!")
from jsonpath_ng.exceptions import (
JsonPathLexerError,
JsonPathParserError,
)
from jsonpath_ng.ext import parse
input = line.replace('json:', '')
try:
parse(input)
except (JsonPathParserError, JsonPathLexerError) as e:
message = field.gettext('\'%s\' is not a valid JSONPath expression. (%s)')
raise ValidationError(message % (input, str(e)))
except:
raise ValidationError("A system-error occurred when validating your JSONPath expression")
# Re #265 - maybe in the future fetch the page and offer a
# warning/notice that it's possible the rule doesn't yet match anything?
if not self.allow_json:
raise ValidationError("jq not permitted in this field!")
if 'jq:' in line:
try:
import jq
except ModuleNotFoundError:
# `jq` requires full compilation in windows and so isn't generally available
raise ValidationError("jq not support not found")
input = line.replace('jq:', '')
try:
jq.compile(input)
except (ValueError) as e:
message = field.gettext('\'%s\' is not a valid jq expression. (%s)')
raise ValidationError(message % (input, str(e)))
except:
raise ValidationError("A system-error occurred when validating your jq expression")
class quickWatchForm(Form):
url = fields.URLField('URL', validators=[validateURL()])
tag = StringField('Group tag', [validators.Optional()])
watch_submit_button = SubmitField('Watch', render_kw={"class": "pure-button pure-button-primary"})
edit_and_watch_submit_button = SubmitField('Edit > Watch', render_kw={"class": "pure-button pure-button-primary"})
# Common to a single watch and the global settings
class commonSettingsForm(Form):
notification_urls = StringListField('Notification URL List', validators=[validators.Optional(), ValidateAppRiseServers()])
notification_title = StringField('Notification Title', default='ChangeDetection.io Notification - {{ watch_url }}', validators=[validators.Optional(), ValidateJinja2Template()])
notification_body = TextAreaField('Notification Body', default='{{ watch_url }} had a change.', validators=[validators.Optional(), ValidateJinja2Template()])
notification_format = SelectField('Notification format', choices=valid_notification_formats.keys())
fetch_backend = RadioField(u'Fetch Method', choices=content_fetcher.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
extract_title_as_title = BooleanField('Extract <title> from document and use as watch title', default=False)
webdriver_delay = IntegerField('Wait seconds before extracting text', validators=[validators.Optional(), validators.NumberRange(min=1,
message="Should contain one or more seconds")])
class SingleBrowserStep(Form):
operation = SelectField('Operation', [validators.Optional()], choices=browser_step_ui_config.keys())
# maybe better to set some <script>var..
selector = StringField('Selector', [validators.Optional()], render_kw={"placeholder": "CSS or xPath selector"})
optional_value = StringField('value', [validators.Optional()], render_kw={"placeholder": "Value"})
# @todo move to JS? ajax fetch new field?
# remove_button = SubmitField('-', render_kw={"type": "button", "class": "pure-button pure-button-primary", 'title': 'Remove'})
# add_button = SubmitField('+', render_kw={"type": "button", "class": "pure-button pure-button-primary", 'title': 'Add new step after'})
class watchForm(commonSettingsForm):
url = fields.URLField('URL', validators=[validateURL()])
tag = StringField('Group tag', [validators.Optional()], default='')
time_between_check = FormField(TimeBetweenCheckForm)
include_filters = StringListField('CSS/JSONPath/JQ/XPath Filters', [ValidateCSSJSONXPATHInput()], default='')
subtractive_selectors = StringListField('Remove elements', [ValidateCSSJSONXPATHInput(allow_xpath=False, allow_json=False)])
extract_text = StringListField('Extract text', [ValidateListRegex()])
title = StringField('Title', default='')
ignore_text = StringListField('Ignore text', [ValidateListRegex()])
headers = StringDictKeyValue('Request headers')
body = TextAreaField('Request body', [validators.Optional()])
method = SelectField('Request method', choices=valid_method, default=default_method)
ignore_status_codes = BooleanField('Ignore status codes (process non-2xx status codes as normal)', default=False)
check_unique_lines = BooleanField('Only trigger when new lines appear', default=False)
trigger_text = StringListField('Trigger/wait for text', [validators.Optional(), ValidateListRegex()])
if os.getenv("PLAYWRIGHT_DRIVER_URL"):
browser_steps = FieldList(FormField(SingleBrowserStep), min_entries=10)
text_should_not_be_present = StringListField('Block change-detection if text matches', [validators.Optional(), ValidateListRegex()])
webdriver_js_execute_code = TextAreaField('Execute JavaScript before change detection', render_kw={"rows": "5"}, validators=[validators.Optional()])
save_button = SubmitField('Save', render_kw={"class": "pure-button pure-button-primary"})
proxy = RadioField('Proxy')
filter_failure_notification_send = BooleanField(
'Send a notification when the filter can no longer be found on the page', default=False)
notification_muted = BooleanField('Notifications Muted / Off', default=False)
notification_screenshot = BooleanField('Attach screenshot to notification (where possible)', default=False)
def validate(self, **kwargs):
if not super().validate():
return False
result = True
# Fail form validation when a body is set for a GET
if self.method.data == 'GET' and self.body.data:
self.body.errors.append('Body must be empty when Request Method is set to GET')
result = False
# Attempt to validate jinja2 templates in the URL
from jinja2 import Environment
# Jinja2 available in URLs along with https://pypi.org/project/jinja2-time/
jinja2_env = Environment(extensions=['jinja2_time.TimeExtension'])
try:
ready_url = str(jinja2_env.from_string(self.url.data).render())
except Exception as e:
self.url.errors.append('Invalid template syntax')
result = False
return result
class SingleExtraProxy(Form):
# maybe better to set some <script>var..
proxy_name = StringField('Name', [validators.Optional()], render_kw={"placeholder": "Name"})
proxy_url = StringField('Proxy URL', [validators.Optional()], render_kw={"placeholder": "http://user:pass@...:3128", "size":50})
# @todo do the validation here instead
# datastore.data['settings']['requests']..
class globalSettingsRequestForm(Form):
time_between_check = FormField(TimeBetweenCheckForm)
proxy = RadioField('Proxy')
jitter_seconds = IntegerField('Random jitter seconds ± check',
render_kw={"style": "width: 5em;"},
validators=[validators.NumberRange(min=0, message="Should contain zero or more seconds")])
extra_proxies = FieldList(FormField(SingleExtraProxy), min_entries=5)
def validate_extra_proxies(self, extra_validators=None):
for e in self.data['extra_proxies']:
if e.get('proxy_name') or e.get('proxy_url'):
if not e.get('proxy_name','').strip() or not e.get('proxy_url','').strip():
self.extra_proxies.errors.append('Both a name, and a Proxy URL is required.')
return False
# datastore.data['settings']['application']..
class globalSettingsApplicationForm(commonSettingsForm):
api_access_token_enabled = BooleanField('API access token security check enabled', default=True, validators=[validators.Optional()])
base_url = StringField('Base URL', validators=[validators.Optional()])
empty_pages_are_a_change = BooleanField('Treat empty pages as a change?', default=False)
fetch_backend = RadioField('Fetch Method', default="html_requests", choices=content_fetcher.available_fetchers(), validators=[ValidateContentFetcherIsReady()])
global_ignore_text = StringListField('Ignore Text', [ValidateListRegex()])
global_subtractive_selectors = StringListField('Remove elements', [ValidateCSSJSONXPATHInput(allow_xpath=False, allow_json=False)])
ignore_whitespace = BooleanField('Ignore whitespace')
password = SaltyPasswordField()
removepassword_button = SubmitField('Remove password', render_kw={"class": "pure-button pure-button-primary"})
render_anchor_tag_content = BooleanField('Render anchor tag content', default=False)
shared_diff_access = BooleanField('Allow access to view diff page when password is enabled', default=False, validators=[validators.Optional()])
filter_failure_notification_threshold_attempts = IntegerField('Number of times the filter can be missing before sending a notification',
render_kw={"style": "width: 5em;"},
validators=[validators.NumberRange(min=0,
message="Should contain zero or more attempts")])
class globalSettingsForm(Form):
# Define these as FormFields/"sub forms", this way it matches the JSON storage
# datastore.data['settings']['application']..
# datastore.data['settings']['requests']..
requests = FormField(globalSettingsRequestForm)
application = FormField(globalSettingsApplicationForm)
save_button = SubmitField('Save', render_kw={"class": "pure-button pure-button-primary"})
class extractDataForm(Form):
extract_regex = StringField('RegEx to extract', validators=[validators.Length(min=1, message="Needs a RegEx")])
extract_submit_button = SubmitField('Extract as CSV', render_kw={"class": "pure-button pure-button-primary"})


@@ -0,0 +1,289 @@
from bs4 import BeautifulSoup
from inscriptis import get_text
from inscriptis.model.config import ParserConfig
from jsonpath_ng.ext import parse
from typing import List
import json
import re
# HTML added to be sure each result matching a filter (.example) gets converted to a new line by Inscriptis
TEXT_FILTER_LIST_LINE_SUFFIX = "<br/>"
# 'price' , 'lowPrice', 'highPrice' are usually under here
# all of those may or may not appear on different websites
LD_JSON_PRODUCT_OFFER_SELECTOR = "json:$..offers"
class JSONNotFound(ValueError):
def __init__(self, msg):
ValueError.__init__(self, msg)
# Given a CSS Rule, and a blob of HTML, return the blob of HTML that matches
def include_filters(include_filters, html_content, append_pretty_line_formatting=False):
soup = BeautifulSoup(html_content, "html.parser")
html_block = ""
r = soup.select(include_filters, separator="")
for element in r:
# When there's more than 1 match, then add the suffix to separate each line
# And where the matched result doesn't include something that will cause Inscriptis to add a newline
# (This way each 'match' reliably has a new-line in the diff)
# Divs are converted to 4 whitespaces by inscriptis
if append_pretty_line_formatting and len(html_block) and not element.name in (['br', 'hr', 'div', 'p']):
html_block += TEXT_FILTER_LIST_LINE_SUFFIX
html_block += str(element)
return html_block
def subtractive_css_selector(css_selector, html_content):
soup = BeautifulSoup(html_content, "html.parser")
for item in soup.select(css_selector):
item.decompose()
return str(soup)
def element_removal(selectors: List[str], html_content):
"""Joins individual filters into one css filter."""
selector = ",".join(selectors)
return subtractive_css_selector(selector, html_content)
# Return str Utf-8 of matched rules
def xpath_filter(xpath_filter, html_content, append_pretty_line_formatting=False):
from lxml import etree, html
tree = html.fromstring(bytes(html_content, encoding='utf-8'))
html_block = ""
r = tree.xpath(xpath_filter.strip(), namespaces={'re': 'http://exslt.org/regular-expressions'})
#@note: //title/text() wont work where <title>CDATA..
for element in r:
# When there's more than 1 match, then add the suffix to separate each line
# And where the matched result doesn't include something that will cause Inscriptis to add a newline
# (This way each 'match' reliably has a new-line in the diff)
# Divs are converted to 4 whitespaces by inscriptis
if append_pretty_line_formatting and len(html_block) and (not hasattr( element, 'tag' ) or not element.tag in (['br', 'hr', 'div', 'p'])):
html_block += TEXT_FILTER_LIST_LINE_SUFFIX
if type(element) == etree._ElementStringResult:
html_block += str(element)
elif type(element) == etree._ElementUnicodeResult:
html_block += str(element)
else:
html_block += etree.tostring(element, pretty_print=True).decode('utf-8')
return html_block
# Extract/find element
def extract_element(find='title', html_content=''):
#Re #106, be sure to handle when its not found
element_text = None
soup = BeautifulSoup(html_content, 'html.parser')
result = soup.find(find)
if result and result.string:
element_text = result.string.strip()
return element_text
#
def _parse_json(json_data, json_filter):
if 'json:' in json_filter:
jsonpath_expression = parse(json_filter.replace('json:', ''))
match = jsonpath_expression.find(json_data)
return _get_stripped_text_from_json_match(match)
if 'jq:' in json_filter:
try:
import jq
except ModuleNotFoundError:
# `jq` requires full compilation in windows and so isn't generally available
raise Exception("jq not support not found")
jq_expression = jq.compile(json_filter.replace('jq:', ''))
match = jq_expression.input(json_data).all()
return _get_stripped_text_from_json_match(match)
def _get_stripped_text_from_json_match(match):
s = []
# More than one result, we will return it as a JSON list.
if len(match) > 1:
for i in match:
s.append(i.value if hasattr(i, 'value') else i)
# Single value, use just the value, as it could be later used in a token in notifications.
if len(match) == 1:
s = match[0].value if hasattr(match[0], 'value') else match[0]
# Re #257 - Better handling where it does not exist, in the case the original 's' value was False..
if not match:
# Re 265 - Just return an empty string when filter not found
return ''
# Ticket #462 - allow the original encoding through, usually it's UTF-8 or similar
stripped_text_from_html = json.dumps(s, indent=4, ensure_ascii=False)
return stripped_text_from_html
# content - json
# json_filter - ie json:$..price
# ensure_is_ldjson_info_type - str "product", optional, "@type == product" (I dont know how to do that as a json selector)
def extract_json_as_string(content, json_filter, ensure_is_ldjson_info_type=None):
stripped_text_from_html = False
# Try to parse/filter out the JSON, if we get some parser error, then maybe it's embedded <script type=ldjson>
try:
stripped_text_from_html = _parse_json(json.loads(content), json_filter)
except json.JSONDecodeError:
# Foreach <script json></script> blob.. just return the first that matches json_filter
s = []
soup = BeautifulSoup(content, 'html.parser')
if ensure_is_ldjson_info_type:
bs_result = soup.findAll('script', {"type": "application/ld+json"})
else:
bs_result = soup.findAll('script')
if not bs_result:
raise JSONNotFound("No parsable JSON found in this document")
for result in bs_result:
# Skip empty tags, and things that dont even look like JSON
if not result.string or not '{' in result.string:
continue
try:
json_data = json.loads(result.string)
except json.JSONDecodeError:
# Just skip it
continue
else:
stripped_text_from_html = _parse_json(json_data, json_filter)
if ensure_is_ldjson_info_type:
# Could sometimes be list, string or something else random
if isinstance(json_data, dict):
# If it has LD JSON 'key' @type, and @type is 'product', and something was found for the search
# (Some sites have multiple of the same ld+json @type='product', but some have the review part, some have the 'price' part)
if json_data.get('@type', False) and json_data.get('@type','').lower() == ensure_is_ldjson_info_type.lower() and stripped_text_from_html:
break
elif stripped_text_from_html:
break
if not stripped_text_from_html:
# Re 265 - Just return an empty string when filter not found
return ''
return stripped_text_from_html
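A minimal usage sketch (not part of this diff), assuming the functions above are in scope; the 'json:' prefix is the same syntax a watch's JSON filter uses:
ld_json_blob = '{"@type": "Product", "offers": {"price": "19.99", "priceCurrency": "EUR"}}'
print(extract_json_as_string(ld_json_blob, 'json:$..price'))  # -> "19.99" (re-serialised by json.dumps)
# The same filter also works when the JSON sits inside a <script type="application/ld+json"> tag
html_blob = '<html><script type="application/ld+json">' + ld_json_blob + '</script></html>'
print(extract_json_as_string(html_blob, 'json:$..price', ensure_is_ldjson_info_type='product'))  # -> "19.99"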
# Mode - "content" return the content without the matches (default)
# - "line numbers" return a list of line numbers that match (int list)
#
# wordlist - list of regex's (str) or words (str)
def strip_ignore_text(content, wordlist, mode="content"):
ignore = []
ignore_regex = []
# @todo check this runs case insensitive
for k in wordlist:
# Is it a regex?
if k[0] == '/':
ignore_regex.append(k.strip(" /"))
else:
ignore.append(k)
i = 0
output = []
ignored_line_numbers = []
for line in content.splitlines():
i += 1
# Always ignore blank lines in this mode. (when this function gets called)
if len(line.strip()):
regex_matches = False
# if any of these match, skip
for regex in ignore_regex:
try:
if re.search(regex, line, re.IGNORECASE):
regex_matches = True
except Exception as e:
continue
if not regex_matches and not any(skip_text.lower() in line.lower() for skip_text in ignore):
output.append(line.encode('utf8'))
else:
ignored_line_numbers.append(i)
# Used for finding out what to highlight
if mode == "line numbers":
return ignored_line_numbers
return "\n".encode('utf8').join(output)
def html_to_text(html_content: str, render_anchor_tag_content=False) -> str:
"""Converts html string to a string with just the text. If ignoring
rendering anchor tag content is enable, anchor tag content are also
included in the text
:param html_content: string with html content
:param render_anchor_tag_content: boolean flag indicating whether to extract
hyperlinks (the anchor tag content) together with text. This refers to the
'href' inside 'a' tags.
Anchor tag content is rendered in the following manner:
'[ text ](anchor tag content)'
:return: extracted text from the HTML
"""
# if anchor tag content flag is set to True define a config for
# extracting this content
if render_anchor_tag_content:
parser_config = ParserConfig(
annotation_rules={"a": ["hyperlink"]}, display_links=True
)
# otherwise set config to None
else:
parser_config = None
# get text and annotations via inscriptis
text_content = get_text(html_content, config=parser_config)
return text_content
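For illustration (not part of this diff), calling html_to_text() with and without anchor rendering, assuming the function above is in scope; exact spacing is down to Inscriptis:
sample = '<p>See the <a href="https://example.com/changelog">changelog</a> for details</p>'
print(html_to_text(sample))                                  # text only
print(html_to_text(sample, render_anchor_tag_content=True))  # text plus the href, roughly '[changelog](https://example.com/changelog)'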
# Does LD+JSON exist with a @type=='product' and a .price set anywhere?
def has_ldjson_product_info(content):
try:
pricing_data = extract_json_as_string(content=content, json_filter=LD_JSON_PRODUCT_OFFER_SELECTOR, ensure_is_ldjson_info_type="product")
except JSONNotFound as e:
# Totally fine
return False
x=bool(pricing_data)
return x
def workarounds_for_obfuscations(content):
"""
Some sites are using sneaky tactics to make prices and other information un-renderable by Inscriptis
This could go into its own Pip package in the future, for faster updates
"""
# HomeDepot.com style <span>$<!-- -->90<!-- -->.<!-- -->74</span>
# https://github.com/weblyzard/inscriptis/issues/45
if not content:
return content
content = re.sub('<!--\s+-->', '', content)
return content


@@ -0,0 +1,130 @@
from abc import ABC, abstractmethod
import time
import validators
class Importer():
remaining_data = []
new_uuids = []
good = 0
def __init__(self):
self.new_uuids = []
self.good = 0
self.remaining_data = []
@abstractmethod
def run(self,
data,
flash,
datastore):
pass
class import_url_list(Importer):
"""
Imports a list, can be in <code>https://example.com tag1, tag2, last tag</code> format
"""
def run(self,
data,
flash,
datastore,
):
urls = data.split("\n")
good = 0
now = time.time()
if (len(urls) > 5000):
flash("Importing 5,000 of the first URLs from your list, the rest can be imported again.")
for url in urls:
url = url.strip()
if not len(url):
continue
tags = ""
# 'tags' should be a csv list after the URL
if ' ' in url:
url, tags = url.split(" ", 1)
# Flask wtform validators wont work with basic auth, use validators package
# Up to 5000 per batch so we dont flood the server
if len(url) and validators.url(url.replace('source:', '')) and good < 5000:
new_uuid = datastore.add_watch(url=url.strip(), tag=tags, write_to_disk_now=False)
if new_uuid:
# Straight into the queue.
self.new_uuids.append(new_uuid)
good += 1
continue
# Worked past the 'continue' above, append it to the bad list
if self.remaining_data is None:
self.remaining_data = []
self.remaining_data.append(url)
flash("{} Imported from list in {:.2f}s, {} Skipped.".format(good, time.time() - now, len(self.remaining_data)))
class import_distill_io_json(Importer):
def run(self,
data,
flash,
datastore,
):
import json
good = 0
now = time.time()
self.new_uuids=[]
try:
data = json.loads(data.strip())
except json.decoder.JSONDecodeError:
flash("Unable to read JSON file, was it broken?", 'error')
return
if not data.get('data'):
flash("JSON structure looks invalid, was it broken?", 'error')
return
for d in data.get('data'):
d_config = json.loads(d['config'])
extras = {'title': d.get('name', None)}
if len(d['uri']) and good < 5000:
try:
# @todo we only support CSS ones at the moment
if d_config['selections'][0]['frames'][0]['excludes'][0]['type'] == 'css':
extras['subtractive_selectors'] = d_config['selections'][0]['frames'][0]['excludes'][0]['expr']
except KeyError:
pass
except IndexError:
pass
extras['include_filters'] = []
try:
if d_config['selections'][0]['frames'][0]['includes'][0]['type'] == 'xpath':
extras['include_filters'].append('xpath:' + d_config['selections'][0]['frames'][0]['includes'][0]['expr'])
else:
extras['include_filters'].append(d_config['selections'][0]['frames'][0]['includes'][0]['expr'])
except KeyError:
pass
except IndexError:
pass
if d.get('tags', False):
extras['tag'] = ", ".join(d['tags'])
new_uuid = datastore.add_watch(url=d['uri'].strip(),
extras=extras,
write_to_disk_now=False)
if new_uuid:
# Straight into the queue.
self.new_uuids.append(new_uuid)
good += 1
flash("{} Imported from Distill.io in {:.2f}s, {} Skipped.".format(len(self.new_uuids), time.time() - now, len(self.remaining_data)))


@@ -0,0 +1,51 @@
from os import getenv
from changedetectionio.notification import (
default_notification_body,
default_notification_format,
default_notification_title,
)
_FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT = 6
class model(dict):
base_config = {
'note': "Hello! If you change this file manually, please be sure to restart your changedetection.io instance!",
'watching': {},
'settings': {
'headers': {
},
'requests': {
'extra_proxies': [], # Configurable extra proxies via the UI
'jitter_seconds': 0,
'proxy': None, # Preferred proxy connection
'time_between_check': {'weeks': None, 'days': None, 'hours': 3, 'minutes': None, 'seconds': None},
'timeout': int(getenv("DEFAULT_SETTINGS_REQUESTS_TIMEOUT", "45")), # Default 45 seconds
'workers': int(getenv("DEFAULT_SETTINGS_REQUESTS_WORKERS", "10")), # Number of threads, lower is better for slow connections
},
'application': {
'api_access_token_enabled': True,
'password': False,
'base_url' : None,
'extract_title_as_title': False,
'empty_pages_are_a_change': False,
'fetch_backend': getenv("DEFAULT_FETCH_BACKEND", "html_requests"),
'filter_failure_notification_threshold_attempts': _FILTER_FAILURE_THRESHOLD_ATTEMPTS_DEFAULT,
'global_ignore_text': [], # List of text to ignore when calculating the comparison checksum
'global_subtractive_selectors': [],
'ignore_whitespace': True,
'render_anchor_tag_content': False,
'notification_urls': [], # Apprise URL list
# Custom notification content
'notification_title': default_notification_title,
'notification_body': default_notification_body,
'notification_format': default_notification_format,
'schema_version' : 0,
'shared_diff_access': False,
'webdriver_delay': None # Extra delay in seconds before extracting text
}
}
}
def __init__(self, *arg, **kw):
super(model, self).__init__(*arg, **kw)
self.update(self.base_config)
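For reference (not part of this diff), instantiating the settings model above yields the nested dict the rest of the code reads from:
settings = model()
print(settings['settings']['requests']['time_between_check'])  # {'weeks': None, 'days': None, 'hours': 3, ...}
print(settings['settings']['application']['fetch_backend'])    # 'html_requests' unless DEFAULT_FETCH_BACKEND was set at import time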


@@ -0,0 +1,433 @@
from distutils.util import strtobool
import logging
import os
import re
import time
import uuid
# Allowable protocols, protects against javascript: etc
# file:// is further checked by ALLOW_FILE_URI
SAFE_PROTOCOL_REGEX='^(http|https|ftp|file):'
minimum_seconds_recheck_time = int(os.getenv('MINIMUM_SECONDS_RECHECK_TIME', 60))
mtable = {'seconds': 1, 'minutes': 60, 'hours': 3600, 'days': 86400, 'weeks': 86400 * 7}
from changedetectionio.notification import (
default_notification_format_for_watch
)
base_config = {
'body': None,
'check_unique_lines': False, # On change-detected, compare against all history if its something new
'check_count': 0,
'consecutive_filter_failures': 0, # Every time the CSS/xPath filter cannot be located, reset when all is fine.
'extract_text': [], # Extract text by regex after filters
'extract_title_as_title': False,
'fetch_backend': 'system',
'filter_failure_notification_send': strtobool(os.getenv('FILTER_FAILURE_NOTIFICATION_SEND_DEFAULT', 'True')),
'has_ldjson_price_data': None,
'track_ldjson_price_data': None,
'headers': {}, # Extra headers to send
'ignore_text': [], # List of text to ignore when calculating the comparison checksum
'include_filters': [],
'last_checked': 0,
'last_error': False,
'last_viewed': 0, # history key value of the last viewed via the [diff] link
'method': 'GET',
# Custom notification content
'notification_body': None,
'notification_format': default_notification_format_for_watch,
'notification_muted': False,
'notification_title': None,
'notification_screenshot': False, # Include the latest screenshot if available and supported by the apprise URL
'notification_urls': [], # List of URLs to add to the notification Queue (Usually AppRise)
'paused': False,
'previous_md5': False,
'previous_md5_before_filters': False, # Used for skipping changedetection entirely
'proxy': None, # Preferred proxy connection
'subtractive_selectors': [],
'tag': None,
'text_should_not_be_present': [], # Text that should not be present
# Re #110, so then if this is set to None, we know to use the default value instead
# Requires setting to None on submit if it's the same as the default
# Should be all None by default, so we use the system default in this case.
'time_between_check': {'weeks': None, 'days': None, 'hours': None, 'minutes': None, 'seconds': None},
'title': None,
'trigger_text': [], # List of text or regex to wait for until a change is detected
'url': '',
'uuid': str(uuid.uuid4()),
'webdriver_delay': None,
'webdriver_js_execute_code': None, # Run before change-detection
}
def is_safe_url(test_url):
# See https://github.com/dgtlmoon/changedetection.io/issues/1358
# Remove 'source:' prefix so we dont get 'source:javascript:' etc
# 'source:' is a valid way to tell us to return the source
r = re.compile(re.escape('source:'), re.IGNORECASE)
test_url = r.sub('', test_url)
pattern = re.compile(os.getenv('SAFE_PROTOCOL_REGEX', SAFE_PROTOCOL_REGEX), re.IGNORECASE)
if not pattern.match(test_url.strip()):
return False
return True
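A quick sketch (not part of this diff) of what the default SAFE_PROTOCOL_REGEX accepts and rejects, assuming is_safe_url() is in scope:
print(is_safe_url('https://example.com'))              # True
print(is_safe_url('source:https://example.com/feed'))  # True - the 'source:' prefix is stripped first
print(is_safe_url('javascript:alert(1)'))              # False - not an allowed protocol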
class model(dict):
__newest_history_key = None
__history_n = 0
jitter_seconds = 0
def __init__(self, *arg, **kw):
self.update(base_config)
self.__datastore_path = kw['datastore_path']
self['uuid'] = str(uuid.uuid4())
del kw['datastore_path']
if kw.get('default'):
self.update(kw['default'])
del kw['default']
# Be sure the cached timestamp is ready
bump = self.history
# Goes at the end so we update the default object with the initialiser
super(model, self).__init__(*arg, **kw)
@property
def viewed(self):
if int(self['last_viewed']) >= int(self.newest_history_key) :
return True
return False
def ensure_data_dir_exists(self):
if not os.path.isdir(self.watch_data_dir):
print ("> Creating data dir {}".format(self.watch_data_dir))
os.mkdir(self.watch_data_dir)
@property
def link(self):
url = self.get('url', '')
if not is_safe_url(url):
return 'DISABLED'
ready_url = url
if '{%' in url or '{{' in url:
from jinja2 import Environment
# Jinja2 available in URLs along with https://pypi.org/project/jinja2-time/
jinja2_env = Environment(extensions=['jinja2_time.TimeExtension'])
try:
ready_url = str(jinja2_env.from_string(url).render())
except Exception as e:
from flask import (
flash, Markup, url_for
)
message = Markup('<a href="{}#general">The URL {} is invalid and cannot be used, click to edit</a>'.format(
url_for('edit_page', uuid=self.get('uuid')), self.get('url', '')))
flash(message, 'error')
return ''
return ready_url
@property
def get_fetch_backend(self):
"""
Like just using the `fetch_backend` key but there could be some logic
:return:
"""
# Maybe also if is_image etc?
# This is because chrome/playwright wont render the PDF in the browser and we will just fetch it and use pdf2html to see the text.
if self.is_pdf:
return 'html_requests'
return self.get('fetch_backend')
@property
def is_pdf(self):
# content_type field is set in the future
# https://github.com/dgtlmoon/changedetection.io/issues/1392
# Not sure the best logic here
return self.get('url', '').lower().endswith('.pdf') or 'pdf' in self.get('content_type', '').lower()
@property
def label(self):
# Used for sorting
if self['title']:
return self['title']
return self['url']
@property
def last_changed(self):
# last_changed will be the newest snapshot, but when we have just one snapshot, it should be 0
if self.__history_n <= 1:
return 0
if self.__newest_history_key:
return int(self.__newest_history_key)
return 0
@property
def history_n(self):
return self.__history_n
@property
def history(self):
"""History index is just a text file as a list
{watch-uuid}/history.txt
contains a list like
{epoch-time},{filename}\n
We read in this list as the history information
"""
tmp_history = {}
# Read the history file as a dict
fname = os.path.join(self.watch_data_dir, "history.txt")
if os.path.isfile(fname):
logging.debug("Reading history index " + str(time.time()))
with open(fname, "r") as f:
for i in f.readlines():
if ',' in i:
k, v = i.strip().split(',', 2)
# The index history could contain a relative path, so we need to make the fullpath
# so that python can read it
if not '/' in v and not '\'' in v:
v = os.path.join(self.watch_data_dir, v)
else:
# It's possible that they moved the datadir on older versions
# So the snapshot exists but is in a different path
snapshot_fname = v.split('/')[-1]
proposed_new_path = os.path.join(self.watch_data_dir, snapshot_fname)
if not os.path.exists(v) and os.path.exists(proposed_new_path):
v = proposed_new_path
tmp_history[k] = v
if len(tmp_history):
self.__newest_history_key = list(tmp_history.keys())[-1]
self.__history_n = len(tmp_history)
return tmp_history
@property
def has_history(self):
fname = os.path.join(self.watch_data_dir, "history.txt")
return os.path.isfile(fname)
# Returns the newest key, but if theres only 1 record, then it's counted as not being new, so return 0.
@property
def newest_history_key(self):
if self.__newest_history_key is not None:
return self.__newest_history_key
if len(self.history) <= 1:
return 0
bump = self.history
return self.__newest_history_key
# Save some text file to the appropriate path and bump the history
# result_obj from fetch_site_status.run()
def save_history_text(self, contents, timestamp):
self.ensure_data_dir_exists()
# Small hack so that we sleep just enough to allow 1 second between history snapshots
# this is because history.txt indexes/keys snapshots by epoch seconds and we dont want dupe keys
if self.__newest_history_key and int(timestamp) == int(self.__newest_history_key):
time.sleep(timestamp - self.__newest_history_key)
snapshot_fname = "{}.txt".format(str(uuid.uuid4()))
# in /diff/ and /preview/ we are going to assume for now that it's UTF-8 when reading
# most sites are utf-8 and some are even broken utf-8
with open(os.path.join(self.watch_data_dir, snapshot_fname), 'wb') as f:
f.write(contents)
f.close()
# Append to index
# @todo check last char was \n
index_fname = os.path.join(self.watch_data_dir, "history.txt")
with open(index_fname, 'a') as f:
f.write("{},{}\n".format(timestamp, snapshot_fname))
f.close()
self.__newest_history_key = timestamp
self.__history_n += 1
# @todo bump static cache of the last timestamp so we dont need to examine the file to set a proper ''viewed'' status
return snapshot_fname
@property
def has_empty_checktime(self):
# using all() + dictionary comprehension
# Check if all values are 0 in dictionary
res = all(x == None or x == False or x==0 for x in self.get('time_between_check', {}).values())
return res
def threshold_seconds(self):
seconds = 0
for m, n in mtable.items():
x = self.get('time_between_check', {}).get(m, None)
if x:
seconds += x * n
return seconds
# Iterate over all history texts and see if something new exists
def lines_contain_something_unique_compared_to_history(self, lines: list):
local_lines = set([l.decode('utf-8').strip().lower() for l in lines])
# Compare each lines (set) against each history text file (set) looking for something new..
existing_history = set({})
for k, v in self.history.items():
alist = set([line.decode('utf-8').strip().lower() for line in open(v, 'rb')])
existing_history = existing_history.union(alist)
# Check that everything in local_lines(new stuff) already exists in existing_history - it should
# if not, something new happened
return not local_lines.issubset(existing_history)
def get_screenshot(self):
fname = os.path.join(self.watch_data_dir, "last-screenshot.png")
if os.path.isfile(fname):
return fname
# False is not an option for AppRise, must be type None
return None
def get_screenshot_as_jpeg(self):
# Created by save_screenshot()
fname = os.path.join(self.watch_data_dir, "last-screenshot.jpg")
if os.path.isfile(fname):
return fname
# False is not an option for AppRise, must be type None
return None
def __get_file_ctime(self, filename):
fname = os.path.join(self.watch_data_dir, filename)
if os.path.isfile(fname):
return int(os.path.getmtime(fname))
return False
@property
def error_text_ctime(self):
return self.__get_file_ctime('last-error.txt')
@property
def snapshot_text_ctime(self):
if self.history_n==0:
return False
timestamp = list(self.history.keys())[-1]
return int(timestamp)
@property
def snapshot_screenshot_ctime(self):
return self.__get_file_ctime('last-screenshot.png')
@property
def snapshot_error_screenshot_ctime(self):
return self.__get_file_ctime('last-error-screenshot.png')
@property
def watch_data_dir(self):
# The base dir of the watch data
return os.path.join(self.__datastore_path, self['uuid'])
def get_error_text(self):
"""Return the text saved from a previous request that resulted in a non-200 error"""
fname = os.path.join(self.watch_data_dir, "last-error.txt")
if os.path.isfile(fname):
with open(fname, 'r') as f:
return f.read()
return False
def get_error_snapshot(self):
"""Return path to the screenshot that resulted in a non-200 error"""
fname = os.path.join(self.watch_data_dir, "last-error-screenshot.png")
if os.path.isfile(fname):
return fname
return False
def pause(self):
self['paused'] = True
def unpause(self):
self['paused'] = False
def toggle_pause(self):
self['paused'] ^= True
def mute(self):
self['notification_muted'] = True
def unmute(self):
self['notification_muted'] = False
def toggle_mute(self):
self['notification_muted'] ^= True
def extract_regex_from_all_history(self, regex):
import csv
import re
import datetime
csv_output_filename = False
csv_writer = False
f = None
# self.history will be keyed with the full path
for k, fname in self.history.items():
if os.path.isfile(fname):
with open(fname, "r") as f:
contents = f.read()
res = re.findall(regex, contents, re.MULTILINE)
if res:
if not csv_writer:
# A file on the disk can be transferred much faster via flask than a string reply
csv_output_filename = 'report.csv'
f = open(os.path.join(self.watch_data_dir, csv_output_filename), 'w')
# @todo some headers in the future
#fieldnames = ['Epoch seconds', 'Date']
csv_writer = csv.writer(f,
delimiter=',',
quotechar='"',
quoting=csv.QUOTE_MINIMAL,
#fieldnames=fieldnames
)
csv_writer.writerow(['Epoch seconds', 'Date'])
# csv_writer.writeheader()
date_str = datetime.datetime.fromtimestamp(int(k)).strftime('%Y-%m-%d %H:%M:%S')
for r in res:
row = [k, date_str]
if isinstance(r, str):
row.append(r)
else:
row+=r
csv_writer.writerow(row)
if f:
f.close()
return csv_output_filename
@property
# Return list of tags, stripped and lowercase, used for searching
def all_tags(self):
return [s.strip().lower() for s in self.get('tag','').split(',')]
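A small usage sketch (not part of this diff) for the watch model above; datastore_path is required, and the values passed via 'default' here are hypothetical:
watch = model(datastore_path='/tmp/example-datastore',
              default={'url': 'https://example.com/pricing',
                       'tag': 'SaaS, Pricing',
                       'time_between_check': {'weeks': None, 'days': None, 'hours': 6, 'minutes': None, 'seconds': None}})
print(watch.all_tags)             # ['saas', 'pricing']
print(watch.threshold_seconds())  # 21600
print(watch.link)                 # 'https://example.com/pricing' (an unsafe protocol would return 'DISABLED')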


@@ -0,0 +1,223 @@
import apprise
from jinja2 import Environment, BaseLoader
from apprise import NotifyFormat
import json
valid_tokens = {
'base_url': '',
'watch_url': '',
'watch_uuid': '',
'watch_title': '',
'watch_tag': '',
'diff': '',
'diff_full': '',
'diff_url': '',
'preview_url': '',
'current_snapshot': ''
}
default_notification_format_for_watch = 'System default'
default_notification_format = 'Text'
default_notification_body = '{{watch_url}} had a change.\n---\n{{diff}}\n---\n'
default_notification_title = 'ChangeDetection.io Notification - {{watch_url}}'
valid_notification_formats = {
'Text': NotifyFormat.TEXT,
'Markdown': NotifyFormat.MARKDOWN,
'HTML': NotifyFormat.HTML,
# Used only for editing a watch (not for global)
default_notification_format_for_watch: default_notification_format_for_watch
}
# include the decorator
from apprise.decorators import notify
@notify(on="delete")
@notify(on="deletes")
@notify(on="get")
@notify(on="gets")
@notify(on="post")
@notify(on="posts")
@notify(on="put")
@notify(on="puts")
def apprise_custom_api_call_wrapper(body, title, notify_type, *args, **kwargs):
import requests
url = kwargs['meta'].get('url')
if url.startswith('post'):
r = requests.post
elif url.startswith('get'):
r = requests.get
elif url.startswith('put'):
r = requests.put
elif url.startswith('delete'):
r = requests.delete
url = url.replace('post://', 'http://')
url = url.replace('posts://', 'https://')
url = url.replace('put://', 'http://')
url = url.replace('puts://', 'https://')
url = url.replace('get://', 'http://')
url = url.replace('gets://', 'https://')
url = url.replace('put://', 'http://')
url = url.replace('puts://', 'https://')
url = url.replace('delete://', 'http://')
url = url.replace('deletes://', 'https://')
# Try to auto-guess if it's JSON
headers = {}
try:
json.loads(body)
headers = {'Content-Type': 'application/json; charset=utf-8'}
except ValueError as e:
pass
r(url, headers=headers, data=body)
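For reference (not part of this diff), how the custom scheme prefixes handled above map onto real HTTP calls, based on the replace() chain; the dict is only an illustration:
scheme_to_http = {
    'get://':    ('GET',    'http://'),  'gets://':    ('GET',    'https://'),
    'post://':   ('POST',   'http://'),  'posts://':   ('POST',   'https://'),
    'put://':    ('PUT',    'http://'),  'puts://':    ('PUT',    'https://'),
    'delete://': ('DELETE', 'http://'),  'deletes://': ('DELETE', 'https://'),
}
# e.g. a notification URL of 'posts://hooks.example.com/notify' becomes an HTTPS POST of the rendered body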
def process_notification(n_object, datastore):
# Insert variables into the notification content
notification_parameters = create_notification_parameters(n_object, datastore)
# Get the notification body from datastore
jinja2_env = Environment(loader=BaseLoader)
n_body = jinja2_env.from_string(n_object.get('notification_body', default_notification_body)).render(**notification_parameters)
n_title = jinja2_env.from_string(n_object.get('notification_title', default_notification_title)).render(**notification_parameters)
n_format = valid_notification_formats.get(
n_object['notification_format'],
valid_notification_formats[default_notification_format],
)
# https://github.com/caronc/apprise/wiki/Development_LogCapture
# Anything higher than or equal to WARNING (which covers things like Connection errors)
# raise it as an exception
apobjs=[]
sent_objs=[]
from .apprise_asset import asset
for url in n_object['notification_urls']:
url = jinja2_env.from_string(url).render(**notification_parameters)
apobj = apprise.Apprise(debug=True, asset=asset)
url = url.strip()
if len(url):
print(">> Process Notification: AppRise notifying {}".format(url))
with apprise.LogCapture(level=apprise.logging.DEBUG) as logs:
# Re 323 - Limit discord length to their 2000 char limit total or it wont send.
# Because different notifications may require different pre-processing, run each sequentially :(
# 2000 bytes minus -
# 200 bytes for the overhead of the _entire_ json payload, 200 bytes for {tts, wait, content} etc headers
# Length of URL - Incase they specify a longer custom avatar_url
# So if no avatar_url is specified, add one so it can be correctly calculated into the total payload
k = '?' if not '?' in url else '&'
if not 'avatar_url' in url \
and not url.startswith('mail') \
and not url.startswith('post') \
and not url.startswith('get') \
and not url.startswith('delete') \
and not url.startswith('put'):
url += k + 'avatar_url=https://raw.githubusercontent.com/dgtlmoon/changedetection.io/master/changedetectionio/static/images/avatar-256x256.png'
if url.startswith('tgram://'):
# Telegram only supports a limited subset of HTML, remove the '<br/>' we place in.
# re https://github.com/dgtlmoon/changedetection.io/issues/555
# @todo re-use an existing library we have already imported to strip all non-allowed tags
n_body = n_body.replace('<br/>', '\n')
n_body = n_body.replace('</br>', '\n')
# real limit is 4096, but minus some for extra metadata
payload_max_size = 3600
body_limit = max(0, payload_max_size - len(n_title))
n_title = n_title[0:payload_max_size]
n_body = n_body[0:body_limit]
elif url.startswith('discord://') or url.startswith('https://discordapp.com/api/webhooks') or url.startswith('https://discord.com/api'):
# real limit is 2000, but minus some for extra metadata
payload_max_size = 1700
body_limit = max(0, payload_max_size - len(n_title))
n_title = n_title[0:payload_max_size]
n_body = n_body[0:body_limit]
elif url.startswith('mailto'):
# Apprise will default to HTML, so we need to override it
# So that what's generated in n_body is in line with what is going to be sent.
# https://github.com/caronc/apprise/issues/633#issuecomment-1191449321
if not 'format=' in url and (n_format == 'text' or n_format == 'markdown'):
prefix = '?' if not '?' in url else '&'
url = "{}{}format={}".format(url, prefix, n_format)
apobj.add(url)
apobj.notify(
title=n_title,
body=n_body,
body_format=n_format,
# False is not an option for AppRise, must be type None
attach=n_object.get('screenshot', None)
)
apobj.clear()
# In case it needs to exist in memory for a while afterwards to process(?)
apobjs.append(apobj)
# Returns empty string if nothing found, multi-line string otherwise
log_value = logs.getvalue()
if log_value and ('WARNING' in log_value or 'ERROR' in log_value):
raise Exception(log_value)
sent_objs.append({'title': n_title,
'body': n_body,
'url' : url,
'body_format': n_format})
# Return what was sent for better logging - after the for loop
return sent_objs
# Notification title + body content parameters get created here.
def create_notification_parameters(n_object, datastore):
from copy import deepcopy
# in the case we send a test notification from the main settings, there is no UUID.
uuid = n_object['uuid'] if 'uuid' in n_object else ''
if uuid != '':
watch_title = datastore.data['watching'][uuid]['title']
watch_tag = datastore.data['watching'][uuid]['tag']
else:
watch_title = 'Change Detection'
watch_tag = ''
# Create URLs to customise the notification with
base_url = datastore.data['settings']['application']['base_url']
watch_url = n_object['watch_url']
# Re #148 - Some people have just {{ base_url }} in the body or title, but this may break some notification services
# like 'Join', so it's always best to at least set something obvious so that they are not broken.
if base_url == '':
base_url = "<base-url-env-var-not-set>"
diff_url = "{}/diff/{}".format(base_url, uuid)
preview_url = "{}/preview/{}".format(base_url, uuid)
# Not sure deepcopy is needed here, but why not
tokens = deepcopy(valid_tokens)
# Valid_tokens also used as a field validator
tokens.update(
{
'base_url': base_url if base_url is not None else '',
'watch_url': watch_url,
'watch_uuid': uuid,
'watch_title': watch_title if watch_title is not None else '',
'watch_tag': watch_tag if watch_tag is not None else '',
'diff_url': diff_url,
'diff': n_object.get('diff', ''), # Null default in the case we use a test
'diff_full': n_object.get('diff_full', ''), # Null default in the case we use a test
'preview_url': preview_url,
'current_snapshot': n_object['current_snapshot'] if 'current_snapshot' in n_object else ''
})
return tokens
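A hypothetical n_object (not part of this diff) of the kind process_notification() consumes, showing where the Jinja2 tokens from valid_tokens get substituted; a real call also needs the uuid to exist in the datastore:
n_object = {
    'watch_url': 'https://example.com/pricing',
    'notification_urls': ['mailto://user:app-password@example.com'],
    'notification_title': 'Change detected on {{watch_url}}',
    'notification_body': '{{watch_url}} had a change.\n---\n{{diff}}\n---\n',
    'notification_format': 'Text',
    'diff': '- Old price 19.99\n+ New price 17.99',
    'uuid': 'hypothetical-watch-uuid',
}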


@@ -1,7 +1,7 @@
[pytest]
addopts = --no-start-live-server --live-server-port=5005
#testpaths = tests pytest_invenio
#live_server_scope = session
#live_server_scope = function
filterwarnings =
ignore::DeprecationWarning:urllib3.*:


@@ -0,0 +1,10 @@
from dataclasses import dataclass, field
from typing import Any
# So that we can queue some metadata in `item`
# https://docs.python.org/3/library/queue.html#queue.PriorityQueue
#
@dataclass(order=True)
class PrioritizedItem:
priority: int
item: Any=field(compare=False)
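A short sketch (not part of this diff) of the intended use with queue.PriorityQueue, per the comment above; the item payloads are hypothetical:
from queue import PriorityQueue
q = PriorityQueue()
q.put(PrioritizedItem(priority=5, item={'uuid': 'later-watch'}))
q.put(PrioritizedItem(priority=1, item={'uuid': 'urgent-watch'}))
print(q.get().item)  # {'uuid': 'urgent-watch'} - the lowest priority value comes out first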


@@ -0,0 +1,215 @@
// Copyright (C) 2021 Leigh Morresi (dgtlmoon@gmail.com)
// All rights reserved.
// @file Scrape the page looking for elements of concern (%ELEMENTS%)
// http://matatk.agrip.org.uk/tests/position-and-width/
// https://stackoverflow.com/questions/26813480/when-is-element-getboundingclientrect-guaranteed-to-be-updated-accurate
//
// Some pages like https://www.londonstockexchange.com/stock/NCCL/ncondezi-energy-limited/analysis
// will automatically force a scroll somewhere, so include the position offset
// Lets hope the position doesnt change while we iterate the bbox's, but this is better than nothing
var scroll_y=+document.documentElement.scrollTop || document.body.scrollTop
// Include the getXpath script directly, easier than fetching
function getxpath(e) {
var n = e;
if (n && n.id) return '//*[@id="' + n.id + '"]';
for (var o = []; n && Node.ELEMENT_NODE === n.nodeType;) {
for (var i = 0, r = !1, d = n.previousSibling; d;) d.nodeType !== Node.DOCUMENT_TYPE_NODE && d.nodeName === n.nodeName && i++, d = d.previousSibling;
for (d = n.nextSibling; d;) {
if (d.nodeName === n.nodeName) {
r = !0;
break
}
d = d.nextSibling
}
o.push((n.prefix ? n.prefix + ":" : "") + n.localName + (i || r ? "[" + (i + 1) + "]" : "")), n = n.parentNode
}
return o.length ? "/" + o.reverse().join("/") : ""
}
const findUpTag = (el) => {
let r = el
chained_css = [];
depth = 0;
// Strategy 1: If it's an input, with name, and there's only one, prefer that
if (el.name !== undefined && el.name.length) {
var proposed = el.tagName + "[name=" + el.name + "]";
var proposed_element = window.document.querySelectorAll(proposed);
if(proposed_element.length) {
if (proposed_element.length === 1) {
return proposed;
} else {
// Some sites change ID but name= stays the same, we can hit it if we know the index
// Find all the elements that match and work out the input[n]
var n=Array.from(proposed_element).indexOf(el);
// Return a Playwright selector for the nth match, e.g. input[name=zipcode] >> nth=2
return proposed+" >> nth="+n;
}
}
}
// Strategy 2: Keep going up until we hit an ID tag, imagine it's like #list-widget div h4
while (r.parentNode) {
if (depth == 5) {
break;
}
if ('' !== r.id) {
chained_css.unshift("#" + CSS.escape(r.id));
final_selector = chained_css.join(' > ');
// Be sure theres only one, some sites have multiples of the same ID tag :-(
if (window.document.querySelectorAll(final_selector).length == 1) {
return final_selector;
}
return null;
} else {
chained_css.unshift(r.tagName.toLowerCase());
}
r = r.parentNode;
depth += 1;
}
return null;
}
// @todo - if it's SVG or IMG, go into image diff mode
// %ELEMENTS% replaced at injection time because different interfaces use it with different settings
var elements = window.document.querySelectorAll("%ELEMENTS%");
var size_pos = [];
// after page fetch, inject this JS
// build a map of all elements and their positions (maybe that only include text?)
var bbox;
for (var i = 0; i < elements.length; i++) {
bbox = elements[i].getBoundingClientRect();
// Exclude items that are not interactable or visible
if(elements[i].style.opacity === "0") {
continue
}
if(elements[i].style.display === "none" || elements[i].style.pointerEvents === "none" ) {
continue
}
// Skip really small ones, and where width or height ==0
if (bbox['width'] * bbox['height'] < 100) {
continue;
}
// Don't include elements that are offset from canvas
if (bbox['top']+scroll_y < 0 || bbox['left'] < 0) {
continue;
}
// @todo the getXpath kind of sucks, it doesnt know when there is for example just one ID sometimes
// it should not traverse when we know we can anchor off just an ID one level up etc..
// maybe, get current class or id, keep traversing up looking for only class or id until there is just one match
// 1st primitive - if it has class, try joining it all and select, if theres only one.. well thats us.
xpath_result = false;
try {
var d = findUpTag(elements[i]);
if (d) {
xpath_result = d;
}
} catch (e) {
console.log(e);
}
// You could swap it and default to getXpath and then try the smarter one
// default back to the less intelligent one
if (!xpath_result) {
try {
// I've seen on FB and eBay that this doesnt work
// ReferenceError: getXPath is not defined at eval (eval at evaluate (:152:29), <anonymous>:67:20) at UtilityScript.evaluate (<anonymous>:159:18) at UtilityScript.<anonymous> (<anonymous>:1:44)
xpath_result = getxpath(elements[i]);
} catch (e) {
console.log(e);
continue;
}
}
if (window.getComputedStyle(elements[i]).visibility === "hidden") {
continue;
}
// @todo Possible to ONLY list where it's clickable to save JSON xfer size
size_pos.push({
xpath: xpath_result,
width: Math.round(bbox['width']),
height: Math.round(bbox['height']),
left: Math.floor(bbox['left']),
top: Math.floor(bbox['top'])+scroll_y,
tagName: (elements[i].tagName) ? elements[i].tagName.toLowerCase() : '',
tagtype: (elements[i].tagName == 'INPUT' && elements[i].type) ? elements[i].type.toLowerCase() : '',
isClickable: (elements[i].onclick) || window.getComputedStyle(elements[i]).cursor == "pointer"
});
}
// Inject the current one set in the include_filters, which may be a CSS rule
// used for displaying the current one in VisualSelector, where its not one we generated.
if (include_filters.length) {
// Foreach filter, go and find it on the page and add it to the results so we can visualise it again
for (const f of include_filters) {
bbox = false;
q = false;
if (!f.length) {
console.log("xpath_element_scraper: Empty filter, skipping");
continue;
}
try {
// is it xpath?
if (f.startsWith('/') || f.startsWith('xpath:')) {
q = document.evaluate(f.replace('xpath:', ''), document, null, XPathResult.FIRST_ORDERED_NODE_TYPE, null).singleNodeValue;
} else {
q = document.querySelector(f);
}
} catch (e) {
// Maybe catch DOMException and alert?
console.log("xpath_element_scraper: Exception selecting element from filter "+f);
console.log(e);
}
if (q) {
// #1231 - IN the case XPath attribute filter is applied, we will have to traverse up and find the element.
if (q.hasOwnProperty('getBoundingClientRect')) {
bbox = q.getBoundingClientRect();
console.log("xpath_element_scraper: Got filter element, scroll from top was " + scroll_y)
} else {
try {
// Try and see we can find its ownerElement
bbox = q.ownerElement.getBoundingClientRect();
console.log("xpath_element_scraper: Got filter by ownerElement element, scroll from top was " + scroll_y)
} catch (e) {
console.log("xpath_element_scraper: error looking up ownerElement")
}
}
}
if(!q) {
console.log("xpath_element_scraper: filter element " + f + " was not found");
}
if (bbox && bbox['width'] > 0 && bbox['height'] > 0) {
size_pos.push({
xpath: f,
width: parseInt(bbox['width']),
height: parseInt(bbox['height']),
left: parseInt(bbox['left']),
top: parseInt(bbox['top'])+scroll_y
});
}
}
}
// Sort the elements so we find the smallest one first, in other words, we find the smallest one matching in that area
// so that we dont select the wrapping element by mistake and be unable to select what we want
size_pos.sort((a, b) => (a.width*a.height > b.width*b.height) ? 1 : -1)
// Window.width required for proper scaling in the frontend
return {'size_pos': size_pos, 'browser_width': window.innerWidth};
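For reference (not part of this diff), the shape of the object the scraper above returns to the backend; only the keys come from the script, the values here are made up:
example_scraper_result = {
    'browser_width': 1280,
    'size_pos': [
        {'xpath': '//*[@id="price"]', 'width': 120, 'height': 24, 'left': 40, 'top': 310,
         'tagName': 'span', 'tagtype': '', 'isClickable': False},
        # entries pushed for include_filters carry only xpath/width/height/left/top
        {'xpath': '.price-box', 'width': 300, 'height': 80, 'left': 20, 'top': 290},
    ],
}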


@@ -0,0 +1,30 @@
#!/bin/bash
# live_server will throw errors even with live_server_scope=function if I have the live_server setup in different functions
# and I like to restart the server for each test (and have the test cleanup after each test)
# merge request welcome :)
# exit when any command fails
set -e
SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
find tests/test_*py -type f|while read test_name
do
echo "TEST RUNNING $test_name"
pytest $test_name
done
echo "RUNNING WITH BASE_URL SET"
# Now re-run some tests with BASE_URL enabled
# Re #65 - Ability to include a link back to the installation, in the notification.
export BASE_URL="https://really-unique-domain.io"
pytest tests/test_notification.py
# Re-run with HIDE_REFERER set - could affect login
export HIDE_REFERER=True
pytest tests/test_access_control.py


@@ -0,0 +1,61 @@
#!/bin/bash
# exit when any command fails
set -e
# Test proxy list handling, starting two squids on different ports
# Each squid adds a different header to the response, which is the main thing we test for.
docker run --network changedet-network -d --name squid-one --hostname squid-one --rm -v `pwd`/tests/proxy_list/squid.conf:/etc/squid/conf.d/debian.conf ubuntu/squid:4.13-21.10_edge
docker run --network changedet-network -d --name squid-two --hostname squid-two --rm -v `pwd`/tests/proxy_list/squid.conf:/etc/squid/conf.d/debian.conf ubuntu/squid:4.13-21.10_edge
# Used for configuring a custom proxy URL via the UI
docker run --network changedet-network -d \
--name squid-custom \
--hostname squid-custom \
--rm \
-v `pwd`/tests/proxy_list/squid-auth.conf:/etc/squid/conf.d/debian.conf \
-v `pwd`/tests/proxy_list/squid-passwords.txt:/etc/squid3/passwords \
ubuntu/squid:4.13-21.10_edge
## 2nd test actually choose the preferred proxy from proxies.json
docker run --network changedet-network \
-v `pwd`/tests/proxy_list/proxies.json-example:/app/changedetectionio/test-datastore/proxies.json \
test-changedetectionio \
bash -c 'cd changedetectionio && pytest tests/proxy_list/test_multiple_proxy.py'
## Should be a request in the default "first" squid
docker logs squid-one 2>/dev/null|grep chosen.changedetection.io
if [ $? -ne 0 ]
then
echo "Did not see a request to chosen.changedetection.io in the squid logs (while checking preferred proxy - squid one)"
exit 1
fi
# And one in the 'second' squid (user selects this as preferred)
docker logs squid-two 2>/dev/null|grep chosen.changedetection.io
if [ $? -ne 0 ]
then
echo "Did not see a request to chosen.changedetection.io in the squid logs (while checking preferred proxy - squid two)"
exit 1
fi
# Test the UI configurable proxies
docker run --network changedet-network \
test-changedetectionio \
bash -c 'cd changedetectionio && pytest tests/proxy_list/test_select_custom_proxy.py'
# Should see a request for one.changedetection.io in there
docker logs squid-custom 2>/dev/null|grep "TCP_TUNNEL.200.*changedetection.io"
if [ $? -ne 0 ]
then
echo "Did not see a valid request to changedetection.io in the squid logs (while checking preferred proxy - squid two)"
exit 1
fi
docker kill squid-one squid-two squid-custom

Binary file not shown (new image, 33 KiB).
Binary file not shown (new image, 40 KiB).
Binary file not shown (new image, 31 KiB).

@@ -0,0 +1,9 @@
<?xml version="1.0" encoding="utf-8"?>
<browserconfig>
<msapplication>
<tile>
<square150x150logo src="favicons/mstile-150x150.png"/>
<TileColor>#da532c</TileColor>
</tile>
</msapplication>
</browserconfig>

Binary file not shown (new image, 13 KiB).
Binary file not shown (new image, 14 KiB).
Binary file not shown (new image, 12 KiB).
Binary file not shown (new image, 15 KiB).

@@ -0,0 +1,35 @@
<?xml version="1.0" standalone="no"?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 20010904//EN"
"http://www.w3.org/TR/2001/REC-SVG-20010904/DTD/svg10.dtd">
<svg version="1.0" xmlns="http://www.w3.org/2000/svg"
width="256.000000pt" height="256.000000pt" viewBox="0 0 256.000000 256.000000"
preserveAspectRatio="xMidYMid meet">
<metadata>
Created by potrace 1.14, written by Peter Selinger 2001-2017
</metadata>
<g transform="translate(0.000000,256.000000) scale(0.100000,-0.100000)"
fill="#000000" stroke="none">
<path d="M0 1280 l0 -1280 1280 0 1280 0 0 1280 0 1280 -1280 0 -1280 0 0
-1280z m1555 936 c387 -112 675 -426 741 -810 24 -138 15 -352 -20 -470 -106
-353 -360 -606 -713 -712 -75 -22 -113 -27 -253 -31 -144 -5 -176 -2 -252 16
-316 75 -564 271 -707 557 -67 136 -92 237 -98 401 -7 164 5 253 47 378 106
315 349 556 665 659 114 37 180 45 350 41 125 -2 165 -7 240 -29z"/>
<path d="M1091 2165 c-364 -82 -629 -328 -738 -682 -24 -80 -27 -103 -27 -258
-1 -146 2 -182 21 -251 74 -271 259 -497 508 -621 477 -238 1061 -35 1294 450
61 126 83 220 88 379 7 194 -15 307 -93 461 -126 251 -340 428 -614 507 -99
29 -343 37 -439 15z m829 -473 c55 -54 100 -106 100 -116 0 -21 -184 -213
-212 -222 -24 -7 -48 12 -48 38 0 11 26 47 58 80 l57 60 -151 -3 c-145 -4
-152 -5 -190 -31 -22 -15 -78 -73 -124 -128 l-85 -99 -32 31 -32 31 30 38 c17
22 70 79 117 128 66 67 97 92 127 100 22 6 106 11 188 11 81 0 147 3 147 8 0
4 -25 31 -55 61 -55 55 -65 77 -43 99 25 25 50 10 148 -86z m-1002 -101 c46
-24 141 -121 312 -321 203 -236 290 -330 322 -346 22 -11 60 -14 169 -12 l141
3 -51 58 c-28 32 -51 64 -51 71 0 18 21 36 43 36 24 0 217 -193 217 -217 0
-19 -185 -210 -212 -219 -24 -7 -48 12 -48 38 0 10 23 43 50 72 l50 53 -52 7
c-29 3 -93 6 -142 6 -104 0 -152 12 -200 52 -19 15 -135 144 -258 286 -274
316 -305 347 -354 361 -22 6 -94 11 -161 11 -67 0 -128 3 -137 6 -22 9 -21 61
2 67 9 3 86 5 170 6 133 1 158 -2 190 -18z m227 -468 c23 -34 17 -43 -103
-172 -119 -128 -131 -133 -343 -129 l-154 3 0 35 c0 34 1 35 50 42 28 3 96 7
153 7 64 1 115 6 136 15 20 8 71 56 127 120 52 58 99 106 105 106 7 0 20 -12
29 -27z"/>
</g>
</svg>


@@ -0,0 +1,19 @@
{
"name": "",
"short_name": "",
"icons": [
{
"src": "android-chrome-192x192.png",
"sizes": "192x192",
"type": "image/png"
},
{
"src": "android-chrome-256x256.png",
"sizes": "256x256",
"type": "image/png"
}
],
"theme_color": "#ffffff",
"background_color": "#ffffff",
"display": "standalone"
}

Binary file changed (569 B).
Binary file not shown (new image, 14 KiB).
Binary file not shown (new image, 6.2 KiB).
Binary file not shown (new image, 38 KiB).

@@ -0,0 +1,4 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg width="15" height="16.363636" viewBox="0 0 15 16.363636" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg">
<path d="m 14.318182,11.762045 v 1.1925 H 5.4102273 L 11.849318,7.1140909 C 12.234545,9.1561364 12.54,11.181818 14.318182,11.762045 Z m -6.7984093,4.601591 c 1.0759091,0 2.0256823,-0.955909 2.0256823,-2.045454 H 5.4545455 c 0,1.089545 0.9879545,2.045454 2.0652272,2.045454 z M 15,2.8622727 0.9177273,15.636136 0,14.627045 l 1.8443182,-1.6725 h -1.1625 v -1.1925 C 4.0070455,10.677273 2.1784091,4.5388636 5.3611364,2.6897727 5.8009091,2.4347727 6.0709091,1.9609091 6.0702273,1.4488636 v -0.00205 C 6.0702273,0.64772727 6.7104545,0 7.5,0 8.2895455,0 8.9297727,0.64772727 8.9297727,1.4468182 v 0.00205 C 8.9290909,1.9602319 9.199773,2.4354591 9.638864,2.6897773 10.364318,3.111141 10.827273,3.7568228 11.1525,4.5129591 L 14.085682,1.8531818 Z M 6.8181818,1.3636364 C 6.8181818,1.74 7.1236364,2.0454545 7.5,2.0454545 7.8763636,2.0454545 8.1818182,1.74 8.1818182,1.3636364 8.1818182,0.98795455 7.8763636,0.68181818 7.5,0.68181818 c -0.3763636,0 -0.6818182,0.30613637 -0.6818182,0.68181822 z" id="path2" style="fill:#f8321b;stroke-width:0.681818;fill-opacity:1"/>
</svg>

Binary file not shown (new image, 12 KiB).

@@ -0,0 +1,40 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
version="1.1"
id="Layer_1"
x="0px"
y="0px"
viewBox="0 0 115.77 122.88"
style="enable-background:new 0 0 115.77 122.88"
xml:space="preserve"
sodipodi:docname="copy.svg"
inkscape:version="1.1.1 (1:1.1+202109281949+c3084ef5ed)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"><defs
id="defs11" /><sodipodi:namedview
id="namedview9"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
showgrid="false"
inkscape:zoom="5.5501303"
inkscape:cx="57.83648"
inkscape:cy="61.439999"
inkscape:window-width="1920"
inkscape:window-height="1056"
inkscape:window-x="1920"
inkscape:window-y="0"
inkscape:window-maximized="1"
inkscape:current-layer="g6" /><style
type="text/css"
id="style2">.st0{fill-rule:evenodd;clip-rule:evenodd;}</style><g
id="g6"><path
class="st0"
d="M89.62,13.96v7.73h12.19h0.01v0.02c3.85,0.01,7.34,1.57,9.86,4.1c2.5,2.51,4.06,5.98,4.07,9.82h0.02v0.02 v73.27v0.01h-0.02c-0.01,3.84-1.57,7.33-4.1,9.86c-2.51,2.5-5.98,4.06-9.82,4.07v0.02h-0.02h-61.7H40.1v-0.02 c-3.84-0.01-7.34-1.57-9.86-4.1c-2.5-2.51-4.06-5.98-4.07-9.82h-0.02v-0.02V92.51H13.96h-0.01v-0.02c-3.84-0.01-7.34-1.57-9.86-4.1 c-2.5-2.51-4.06-5.98-4.07-9.82H0v-0.02V13.96v-0.01h0.02c0.01-3.85,1.58-7.34,4.1-9.86c2.51-2.5,5.98-4.06,9.82-4.07V0h0.02h61.7 h0.01v0.02c3.85,0.01,7.34,1.57,9.86,4.1c2.5,2.51,4.06,5.98,4.07,9.82h0.02V13.96L89.62,13.96z M79.04,21.69v-7.73v-0.02h0.02 c0-0.91-0.39-1.75-1.01-2.37c-0.61-0.61-1.46-1-2.37-1v0.02h-0.01h-61.7h-0.02v-0.02c-0.91,0-1.75,0.39-2.37,1.01 c-0.61,0.61-1,1.46-1,2.37h0.02v0.01v64.59v0.02h-0.02c0,0.91,0.39,1.75,1.01,2.37c0.61,0.61,1.46,1,2.37,1v-0.02h0.01h12.19V35.65 v-0.01h0.02c0.01-3.85,1.58-7.34,4.1-9.86c2.51-2.5,5.98-4.06,9.82-4.07v-0.02h0.02H79.04L79.04,21.69z M105.18,108.92V35.65v-0.02 h0.02c0-0.91-0.39-1.75-1.01-2.37c-0.61-0.61-1.46-1-2.37-1v0.02h-0.01h-61.7h-0.02v-0.02c-0.91,0-1.75,0.39-2.37,1.01 c-0.61,0.61-1,1.46-1,2.37h0.02v0.01v73.27v0.02h-0.02c0,0.91,0.39,1.75,1.01,2.37c0.61,0.61,1.46,1,2.37,1v-0.02h0.01h61.7h0.02 v0.02c0.91,0,1.75-0.39,2.37-1.01c0.61-0.61,1-1.46,1-2.37h-0.02V108.92L105.18,108.92z"
id="path4"
style="fill:#ffffff;fill-opacity:1" /></g></svg>

Binary file not shown (new image, 22 KiB).

@@ -0,0 +1,51 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!-- Created with Inkscape (http://www.inkscape.org/) -->
<svg
width="20.108334mm"
height="21.43125mm"
viewBox="0 0 20.108334 21.43125"
version="1.1"
id="svg5"
xmlns:xlink="http://www.w3.org/1999/xlink"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<defs
id="defs2" />
<g
id="layer1"
transform="translate(-141.05873,-76.816635)">
<image
width="20.108334"
height="21.43125"
preserveAspectRatio="none"
style="image-rendering:optimizeQuality"
xlink:href="data:image/png;base64,iVBORw0KGgoAAAANSUhEUgAAAEwAAABRCAYAAAB430BuAAAABHNCSVQICAgIfAhkiAAABLxJREFU
eJztnN2Z2jgUhl8Z7petIGwF0WMXsFBBoIKwFWS2gmQryKSCJRXsTAUDBTDRVBCmgkAB9tkLexh+
bIONLGwP7xU2RjafpaOjoyNBCxHNQAJEfG5sl+3ZLrAWeAyST5/sF91mFH3bRbZbsAq4ClaQq2B7
iKYnmg9Z318F20ICRnj8pMOd6E3HscNVsATxmQD/oeghPCnDLO26q2AkYin+TQ7XREyyrn3zgu2J
BSEjZTBZ179pwQ7EEv7KaoovvFnBUsV6ZHrsd+0WTHhKPV1SLGivYEsA1KEtEs2grFitRjQ65VxP
fH5JgEjAKsvXupKwFfYxaYJeSeHcWqVSCuwD7/HQQD8lRHLWDStBWG3slbAElkTc5/lTZdkIJhpN
h6/UUZDyzAgZK8PKVoEKErE8HlD0bBVcI2ZqwdBWYbFgAT+g1UZwrBbcvRyIpofHJ1Sh1rQCZt1k
lN5msQAm8CoYoFF8KVHOsFtQ5aayExBUhpnopJl6J/3/FREGWCrxmaH40/4z1oyQ320Yf5dDozXC
P4QMCRkCY4S5w/tbMTtd4L2Ngo6wJmSQ4hfdScAU+OjgGazgOXEl8oJyof3Z6Spx0iTzgnLKsMoK
w9SRuoR3rHniVVMXwRpDXQR7d+kHOJV6CFZB0khVOBGsTcE6VzWsNVGQizfJptU+N4LlD3AbVfsu
XsOahhvB8nrB08IrtcGNYNIct+EYl2+S6mr0D8kLUMrV6BfFRTzOGs4Ey8p1aNrUnssaliaMO/vV
sfNi3AmW5j54DgUTO/dyJ1hab9iwHhLcNskP23ZMND0kewFBXek6vZvHg/hMiUPSN00z+OBasFig
y8wSRfnZ0adSBz+sUVwFK4jbJhnPP06To1ETczpcCnavHhltHd82LU0AXDbJMGXBU8PSBAA8Jxk0
wnNaqlGSJuAyg+dsXIV38iZqXU3iWsmodhetSNlDQgJGriZxbWVSe1hS/gQ+S/C6j4QEfES21vxU
icXsoC4vC5mqJvbybyXgduucG/YWaYmmj+IdHvpoxFdt8ltRP5h3iZjRqfBh60C4t1rNY7rxAU95
aYnhEp+/u8pgxGfeRCfyJIR5SkLfFOHYXMMzu63PEDF9WQnSo8MUmhduyUWYEzGyvnRmU3683ugG
GAG/2bqJU4RnFDNCpsfWb5chswUnwb5Xg+hxiyo9w7MGJoSVpmYulam+A8scS+5nPYtf+s9mpZw7
J1nayDnCVuu4Ck+E6DqIBYDHHR1+is/n8kVUhfBExMBFMzm4taafkXcWL9BSfBG/nNN8sutYcE3S
d7XI3o6lSpIe/xcAIX/svzDxMVu22BAyLNKL2q9hwrdLiZWwXbP6B99GDLaGSpoOD6JPn4yxK1i8
B0StY1zKsCJiQNxzQ0HRbAm2BsZN2TBDGVaE5USzIVjsNix2VrzWHmUwB6J5fD32uyKCzQ7OxG5D
vzZuQ0E2osXjRlBMjvWe5WtYPE4b2BynXQJlMEToTUegmEiwM1mzQ1nBvqvH5ov1wlZHcA+AZHdc
xQW7vNuQS9kBtzKs1IIRMM7b0q/YvGTzto4qbFutdV5FnLtLk2x3JVWUfXKTbIu9Opc2J6Osj19S
HLfJKO64r6rg/wFBX3+2ZapW8wAAAABJRU5ErkJggg==
"
id="image832"
x="141.05873"
y="76.816635" />
</g>
</svg>


@@ -0,0 +1,84 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
xmlns:dc="http://purl.org/dc/elements/1.1/"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns="http://www.w3.org/2000/svg"
version="1.1"
id="Capa_1"
x="0px"
y="0px"
viewBox="0 0 15 14.998326"
xml:space="preserve"
width="15"
height="14.998326"><metadata
id="metadata39"><rdf:RDF><cc:Work
rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" /><dc:title></dc:title></cc:Work></rdf:RDF></metadata><defs
id="defs37" />
<path
id="path2"
style="fill:#1b98f8;fill-opacity:1;stroke-width:0.0292893"
d="M 7.4975161,6.5052867e-4 C 4.549072,-0.04028702 1.7055675,1.8548221 0.58868606,4.5801341 -0.57739762,7.2574642 0.02596981,10.583326 2.069916,12.671949 4.0364753,14.788409 7.2763651,15.56067 9.989207,14.57284 12.801145,13.617602 14.87442,10.855325 14.985833,7.8845744 15.172496,4.9966544 13.49856,2.1100704 10.911002,0.8209349 9.8598067,0.28073592 8.6791261,-0.00114855 7.4975161,6.5052867e-4 Z M 6.5602569,10.251923 c -0.00509,0.507593 -0.5693885,0.488472 -0.9352002,0.468629 -0.3399386,0.0018 -0.8402048,0.07132 -0.9297965,-0.374189 -0.015842,-1.8973128 -0.015872,-3.7979649 0,-5.6952784 0.1334405,-0.5224315 0.7416869,-0.3424086 1.1377562,-0.374189 0.3969969,-0.084515 0.8245634,0.1963256 0.7272405,0.6382917 0,1.7789118 0,3.5578239 0,5.3367357 z m 3.7490371,0 c -0.0051,0.507593 -0.5693888,0.488472 -0.9352005,0.468629 -0.3399386,0.0018 -0.8402048,0.07132 -0.9297965,-0.374189 -0.015842,-1.8973128 -0.015872,-3.7979649 0,-5.6952784 0.1334405,-0.5224315 0.7416869,-0.3424086 1.1377562,-0.374189 0.3969969,-0.084515 0.8245638,0.1963256 0.7272408,0.6382917 0,1.7789118 0,3.5578239 0,5.3367357 z" />
<g
id="g4"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g6"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g8"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g10"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g12"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g14"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g16"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g18"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g20"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g22"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g24"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g26"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g28"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g30"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g32"
transform="translate(-0.01903604,0.02221043)">
</g>
</svg>

After  |  Size: 2.9 KiB

@@ -0,0 +1,9 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg xmlns="http://www.w3.org/2000/svg" width="75.320129mm" height="92.604164mm" viewBox="0 0 75.320129 92.604164">
<g transform="translate(53.548057 -183.975276) scale(1.4843)">
<path fill="#ff2116" d="M-29.632812 123.94727c-3.551967 0-6.44336 2.89347-6.44336 6.44531v49.49804c0 3.55185 2.891393 6.44532 6.44336 6.44532H8.2167969c3.5519661 0 6.4433591-2.89335 6.4433591-6.44532v-40.70117s.101353-1.19181-.416015-2.35156c-.484969-1.08711-1.275391-1.84375-1.275391-1.84375a1.0584391 1.0584391 0 0 0-.0059-.008l-9.3906254-9.21094a1.0584391 1.0584391 0 0 0-.015625-.0156s-.8017392-.76344-1.9902344-1.27344c-1.39939552-.6005-2.8417968-.53711-2.8417968-.53711l.021484-.002z" color="#000" font-family="sans-serif" overflow="visible" paint-order="markers fill stroke" style="line-height:normal;font-variant-ligatures:normal;font-variant-position:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-alternates:normal;font-feature-settings:normal;text-indent:0;text-align:start;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;text-transform:none;text-orientation:mixed;white-space:normal;shape-padding:0;isolation:auto;mix-blend-mode:normal;solid-color:#000000;solid-opacity:1"/>
<path fill="#f5f5f5" d="M-29.632812 126.06445h28.3789058a1.0584391 1.0584391 0 0 0 .021484 0s1.13480448.011 1.96484378.36719c.79889772.34282 1.36536982.86176 1.36914062.86524.0000125.00001.00391.004.00391.004l9.3671868 9.18945s.564354.59582.837891 1.20899c.220779.49491.234375 1.40039.234375 1.40039a1.0584391 1.0584391 0 0 0-.002.0449v40.74609c0 2.41592-1.910258 4.32813-4.3261717 4.32813H-29.632812c-2.415914 0-4.326172-1.91209-4.326172-4.32813v-49.49804c0-2.41603 1.910258-4.32813 4.326172-4.32813z" color="#000" font-family="sans-serif" overflow="visible" paint-order="markers fill stroke" style="line-height:normal;font-variant-ligatures:normal;font-variant-position:normal;font-variant-caps:normal;font-variant-numeric:normal;font-variant-alternates:normal;font-feature-settings:normal;text-indent:0;text-align:start;text-decoration-line:none;text-decoration-style:solid;text-decoration-color:#000000;text-transform:none;text-orientation:mixed;white-space:normal;shape-padding:0;isolation:auto;mix-blend-mode:normal;solid-color:#000000;solid-opacity:1"/>
<path fill="#ff2116" d="M-23.40766 161.09299c-1.45669-1.45669.11934-3.45839 4.39648-5.58397l2.69124-1.33743 1.04845-2.29399c.57665-1.26169 1.43729-3.32036 1.91254-4.5748l.8641-2.28082-.59546-1.68793c-.73217-2.07547-.99326-5.19438-.52872-6.31588.62923-1.51909 2.69029-1.36323 3.50626.26515.63727 1.27176.57212 3.57488-.18329 6.47946l-.6193 2.38125.5455.92604c.30003.50932 1.1764 1.71867 1.9475 2.68743l1.44924 1.80272 1.8033728-.23533c5.72900399-.74758 7.6912472.523 7.6912472 2.34476 0 2.29921-4.4984914 2.48899-8.2760865-.16423-.8499666-.59698-1.4336605-1.19001-1.4336605-1.19001s-2.3665326.48178-3.531704.79583c-1.202707.32417-1.80274.52719-3.564509 1.12186 0 0-.61814.89767-1.02094 1.55026-1.49858 2.4279-3.24833 4.43998-4.49793 5.1723-1.3991.81993-2.86584.87582-3.60433.13733zm2.28605-.81668c.81883-.50607 2.47616-2.46625 3.62341-4.28553l.46449-.73658-2.11497 1.06339c-3.26655 1.64239-4.76093 3.19033-3.98386 4.12664.43653.52598.95874.48237 2.01093-.16792zm21.21809-5.95578c.80089-.56097.68463-1.69142-.22082-2.1472-.70466-.35471-1.2726074-.42759-3.1031574-.40057-1.1249.0767-2.9337647.3034-3.2403347.37237 0 0 .993716.68678 1.434896.93922.58731.33544 2.0145161.95811 3.0565161 1.27706 1.02785.31461 1.6224.28144 2.0729-.0409zm-8.53152-3.54594c-.4847-.50952-1.30889-1.57296-1.83152-2.3632-.68353-.89643-1.02629-1.52887-1.02629-1.52887s-.4996 1.60694-.90948 2.57394l-1.27876 3.16076-.37075.71695s1.971043-.64627 2.97389-.90822c1.0621668-.27744 3.21787-.70134 3.21787-.70134zm-2.74938-11.02573c.12363-1.0375.1761-2.07346-.15724-2.59587-.9246-1.01077-2.04057-.16787-1.85154 2.23517.0636.8084.26443 2.19033.53292 3.04209l.48817 1.54863.34358-1.16638c.18897-.64151.47882-2.02015.64411-3.06364z"/>
<path fill="#2c2c2c" d="M-20.930423 167.83862h2.364986q1.133514 0 1.840213.2169.706698.20991 1.189489.9446.482795.72769.482795 1.75625 0 .94459-.391832 1.6233-.391833.67871-1.056548.97958-.65772.30087-2.02913.30087h-.818651v3.72941h-1.581322zm1.581322 1.22447v3.33058h.783664q1.049552 0 1.44838-.39184.405826-.39183.405826-1.27345 0-.65772-.265887-1.06355-.265884-.41282-.587747-.50378-.314866-.098-1.000572-.098zm5.50664-1.22447h2.148082q1.560333 0 2.4909318.55276.9375993.55276 1.4133973 1.6443.482791 1.09153.482791 2.42096 0 1.3994-.4338151 2.49793-.4268149 1.09153-1.3154348 1.76324-.8816233.67172-2.5189212.67172h-2.267031zm1.581326 1.26645v7.018h.657715q1.378411 0 2.001144-.9516.6227329-.95858.6227329-2.5539 0-3.5125-2.6238769-3.5125zm6.4722254-1.26645h5.30372941v1.26645H-4.2075842v2.85478h2.9807225v1.26646h-2.9807225v4.16322h-1.5813254z" font-family="Franklin Gothic Medium Cond" letter-spacing="0" style="line-height:125%;-inkscape-font-specification:'Franklin Gothic Medium Cond'" word-spacing="4.26000023"/>
</g>
</svg>

After  |  Size: 5.0 KiB

@@ -0,0 +1,122 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
version="1.1"
id="Capa_1"
x="0px"
y="0px"
viewBox="0 0 15 14.998326"
xml:space="preserve"
width="15"
height="14.998326"
sodipodi:docname="play.svg"
inkscape:version="1.1.1 (1:1.1+202109281949+c3084ef5ed)"
xmlns:inkscape="http://www.inkscape.org/namespaces/inkscape"
xmlns:sodipodi="http://sodipodi.sourceforge.net/DTD/sodipodi-0.dtd"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg"
xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
xmlns:cc="http://creativecommons.org/ns#"
xmlns:dc="http://purl.org/dc/elements/1.1/"><sodipodi:namedview
id="namedview21"
pagecolor="#ffffff"
bordercolor="#666666"
borderopacity="1.0"
inkscape:pageshadow="2"
inkscape:pageopacity="0.0"
inkscape:pagecheckerboard="0"
showgrid="false"
inkscape:zoom="45.47174"
inkscape:cx="7.4991632"
inkscape:cy="7.4991632"
inkscape:window-width="1554"
inkscape:window-height="896"
inkscape:window-x="3048"
inkscape:window-y="227"
inkscape:window-maximized="0"
inkscape:current-layer="Capa_1" /><metadata
id="metadata39"><rdf:RDF><cc:Work
rdf:about=""><dc:format>image/svg+xml</dc:format><dc:type
rdf:resource="http://purl.org/dc/dcmitype/StillImage" /></cc:Work></rdf:RDF></metadata><defs
id="defs37" />
<path
id="path2"
style="fill:#1b98f8;fill-opacity:1;stroke-width:0.0292893"
d="M 7.4980469,0 C 4.5496028,-0.04093755 1.7047721,1.8547661 0.58789062,4.5800781 -0.57819305,7.2574082 0.02636631,10.583252 2.0703125,12.671875 4.0368718,14.788335 7.2754393,15.560096 9.9882812,14.572266 12.800219,13.617028 14.874915,10.855516 14.986328,7.8847656 15.172991,4.9968456 13.497714,2.109448 10.910156,0.8203125 9.858961,0.28011352 8.6796569,-0.00179908 7.4980469,0 Z"
sodipodi:nodetypes="ccccccc" />
<g
id="g4"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g6"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g8"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g10"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g12"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g14"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g16"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g18"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g20"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g22"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g24"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g26"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g28"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g30"
transform="translate(-0.01903604,0.02221043)">
</g>
<g
id="g32"
transform="translate(-0.01903604,0.02221043)">
</g>
<path
sodipodi:type="star"
style="fill:#ffffff;fill-opacity:1;stroke-width:37.7953;paint-order:stroke fill markers"
id="path1203"
inkscape:flatsided="false"
sodipodi:sides="3"
sodipodi:cx="7.2964563"
sodipodi:cy="7.3240671"
sodipodi:r1="3.805218"
sodipodi:r2="1.9026089"
sodipodi:arg1="-0.0017436774"
sodipodi:arg2="1.0454539"
inkscape:rounded="0"
inkscape:randomized="0"
d="M 11.101669,7.317432 8.2506324,8.9701135 5.3995964,10.622795 5.3938504,7.3273846 5.3881041,4.0319742 8.2448863,5.6747033 Z"
inkscape:transform-center-x="-0.94843001"
inkscape:transform-center-y="0.0033175346" /></svg>

After  |  Size: 3.5 KiB

@@ -0,0 +1,2 @@
<?xml version="1.0" encoding="UTF-8"?>
<svg width="83.39" height="89.648" enable-background="new 0 0 122.406 122.881" version="1.1" viewBox="0 0 83.39 89.648" xml:space="preserve" xmlns="http://www.w3.org/2000/svg"><g transform="translate(5e-4 -33.234)"><path d="m44.239 42.946-39.111 39.896 34.908 34.91 39.09-39.876-1.149-34.931zm-0.91791 42.273c0.979-0.979 1.507-1.99 1.577-3.027 0.077-1.043-0.248-2.424-0.967-4.135-0.725-1.717-1.348-3.346-1.87-4.885s-0.814-3.014-0.897-4.432c-0.07-1.42 0.134-2.768 0.624-4.045 0.477-1.279 1.348-2.545 2.607-3.804 2.099-2.099 4.535-3.123 7.314-3.065 2.773 0.063 5.457 1.158 8.04 3.294l2.881 3.034c1.946 2.607 2.799 5.33 2.557 8.166-0.235 2.83-1.532 5.426-3.893 7.785l-6.296-6.297c1.291-1.291 2.035-2.531 2.238-3.727 0.191-1.197-0.165-2.252-1.081-3.168-0.821-0.82-1.717-1.195-2.69-1.139-0.967 0.064-1.908 0.547-2.817 1.457-0.922 0.922-1.393 1.914-1.412 2.977s0.306 2.416 0.973 4.064c0.661 1.652 1.24 3.25 1.736 4.801 0.496 1.553 0.782 3.035 0.858 4.445 0.076 1.426-0.127 2.787-0.591 4.104-0.477 1.316-1.336 2.596-2.588 3.848-2.125 2.125-4.522 3.186-7.212 3.18s-5.311-1.063-7.855-3.16l-3.747 3.746-2.964-2.965 3.766-3.764c-2.423-2.996-3.568-5.998-3.447-9.02 0.127-3.014 1.476-5.813 4.045-8.383l6.278 6.277c-1.412 1.412-2.175 2.799-2.277 4.16-0.108 1.367 0.414 2.627 1.571 3.783 0.839 0.84 1.755 1.26 2.741 1.242 0.985-0.017 1.92-0.47 2.798-1.347zm21.127-46.435h17.457c-0.0269 2.2368 0.69936 16.025 0.69936 16.025l0.785 23.858c0.019 0.609-0.221 1.164-0.619 1.564l5e-3 4e-3 -41.236 42.022c-0.82213 0.8378-2.175 0.83-3.004 0l-37.913-37.91c-0.83-0.83-0.83-2.176 0-3.006l41.236-42.021c0.39287-0.42671 1.502-0.53568 1.502-0.53568zm18.011 11.59c-59.392-29.687-29.696-14.843 0 0z"/></g></svg>

After  |  Size: 1.7 KiB

@@ -0,0 +1,20 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg
width="18"
height="19.92"
viewBox="0 0 18 19.92"
version="1.1"
id="svg6"
xmlns="http://www.w3.org/2000/svg"
xmlns:svg="http://www.w3.org/2000/svg">
<defs
id="defs10" />
<path
d="M -3,-2 H 21 V 22 H -3 Z"
fill="none"
id="path2" />
<path
d="m 15,14.08 c -0.76,0 -1.44,0.3 -1.96,0.77 L 5.91,10.7 C 5.96,10.47 6,10.24 6,10 6,9.76 5.96,9.53 5.91,9.3 L 12.96,5.19 C 13.5,5.69 14.21,6 15,6 16.66,6 18,4.66 18,3 18,1.34 16.66,0 15,0 c -1.66,0 -3,1.34 -3,3 0,0.24 0.04,0.47 0.09,0.7 L 5.04,7.81 C 4.5,7.31 3.79,7 3,7 1.34,7 0,8.34 0,10 c 0,1.66 1.34,3 3,3 0.79,0 1.5,-0.31 2.04,-0.81 l 7.12,4.16 c -0.05,0.21 -0.08,0.43 -0.08,0.65 0,1.61 1.31,2.92 2.92,2.92 1.61,0 2.92,-1.31 2.92,-2.92 0,-1.61 -1.31,-2.92 -2.92,-2.92 z"
id="path4"
style="fill:#ffffff;fill-opacity:1" />
</svg>

After  |  Size: 892 B

@@ -0,0 +1,5 @@
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<svg width="18" height="19.92" viewBox="0 0 18 19.92" xmlns="http://www.w3.org/2000/svg" xmlns:svg="http://www.w3.org/2000/svg">
<path d="M -3,-2 H 21 V 22 H -3 Z" fill="none" id="path2"/>
<path d="m 15,14.08 c -0.76,0 -1.44,0.3 -1.96,0.77 L 5.91,10.7 C 5.96,10.47 6,10.24 6,10 6,9.76 5.96,9.53 5.91,9.3 L 12.96,5.19 C 13.5,5.69 14.21,6 15,6 16.66,6 18,4.66 18,3 18,1.34 16.66,0 15,0 c -1.66,0 -3,1.34 -3,3 0,0.24 0.04,0.47 0.09,0.7 L 5.04,7.81 C 4.5,7.31 3.79,7 3,7 1.34,7 0,8.34 0,10 c 0,1.66 1.34,3 3,3 0.79,0 1.5,-0.31 2.04,-0.81 l 7.12,4.16 c -0.05,0.21 -0.08,0.43 -0.08,0.65 0,1.61 1.31,2.92 2.92,2.92 1.61,0 2.92,-1.31 2.92,-2.92 0,-1.61 -1.31,-2.92 -2.92,-2.92 z" id="path4" style="fill:#0078e7;fill-opacity:1"/>
</svg>

After  |  Size: 787 B

@@ -0,0 +1,454 @@
$(document).ready(function () {
// duplicate
var csrftoken = $('input[name=csrf_token]').val();
$.ajaxSetup({
beforeSend: function (xhr, settings) {
if (!/^(GET|HEAD|OPTIONS|TRACE)$/i.test(settings.type) && !this.crossDomain) {
xhr.setRequestHeader("X-CSRFToken", csrftoken)
}
}
})
var browsersteps_session_id;
var browserless_seconds_remaining = 0;
var apply_buttons_disabled = false;
var include_text_elements = $("#include_text_elements");
var xpath_data = false;
var current_selected_i;
var state_clicked = false;
var c;
// redline highlight context
var ctx;
var last_click_xy = {'x': -1, 'y': -1}
$(window).resize(function () {
set_scale();
});
// Should always be disabled
$('#browser_steps >li:first-child select').val('Goto site').attr('disabled', 'disabled');
$('#browsersteps-click-start').click(function () {
$("#browsersteps-click-start").fadeOut();
$("#browsersteps-selector-wrapper .spinner").fadeIn();
start();
});
$('a#browsersteps-tab').click(function () {
reset();
});
window.addEventListener('hashchange', function () {
if (window.location.hash == '#browser-steps') {
reset();
}
});
function reset() {
xpath_data = false;
$('#browsersteps-img').removeAttr('src');
$("#browsersteps-click-start").show();
$("#browsersteps-selector-wrapper .spinner").hide();
browserless_seconds_remaining = 0;
browsersteps_session_id = false;
apply_buttons_disabled = false;
ctx.clearRect(0, 0, c.width, c.height);
set_first_gotosite_disabled();
}
function set_first_gotosite_disabled() {
$('#browser_steps >li:first-child select').val('Goto site').attr('disabled', 'disabled');
$('#browser_steps >li:first-child').css('opacity', '0.5');
}
// Show seconds remaining until playwright/browserless needs to restart the session
// (See comment at the top of changedetectionio/blueprint/browser_steps/__init__.py )
setInterval(() => {
if (browserless_seconds_remaining >= 1) {
document.getElementById('browserless-seconds-remaining').innerText = browserless_seconds_remaining + " seconds remaining in session";
browserless_seconds_remaining -= 1;
}
}, "1000")
function set_scale() {
// Some things to check if the scaling doesn't work:
// - that the widths/sizes really match the actual screen size: cat elements.json | grep -o 'width......' | sort | uniq
selector_image = $("img#browsersteps-img")[0];
selector_image_rect = selector_image.getBoundingClientRect();
// make the canvas and input steps the same size as the image
$('#browsersteps-selector-canvas').attr('height', selector_image_rect.height).attr('width', selector_image_rect.width);
//$('#browsersteps-selector-wrapper').attr('width', selector_image_rect.width);
$('#browser-steps-ui').attr('width', selector_image_rect.width);
x_scale = selector_image_rect.width / xpath_data['browser_width'];
y_scale = selector_image_rect.height / selector_image.naturalHeight;
ctx.strokeStyle = 'rgba(255,0,0, 0.9)';
ctx.fillStyle = 'rgba(255,0,0, 0.1)';
ctx.lineWidth = 3;
console.log("scaling set x: " + x_scale + " by y:" + y_scale);
}
// bootstrap it, this will trigger everything else
$('#browsersteps-img').bind('load', function () {
$('body').addClass('full-width');
console.log("Loaded background...");
document.getElementById("browsersteps-selector-canvas");
c = document.getElementById("browsersteps-selector-canvas");
// redline highlight context
ctx = c.getContext("2d");
// @todo is click better?
$('#browsersteps-selector-canvas').off("mousemove mousedown click");
// Undo disable_browsersteps_ui
$("#browser-steps-ui").css('opacity', '1.0');
// init
set_scale();
// @todo click ? some better library?
$('#browsersteps-selector-canvas').bind('click', function (e) {
// https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent
e.preventDefault()
});
$('#browsersteps-selector-canvas').bind('mousedown', function (e) {
// https://developer.mozilla.org/en-US/docs/Web/API/MouseEvent
e.preventDefault()
console.log(e);
console.log("current xpath in index is " + current_selected_i);
last_click_xy = {'x': parseInt((1 / x_scale) * e.offsetX), 'y': parseInt((1 / y_scale) * e.offsetY)}
process_selected(current_selected_i);
current_selected_i = false;
// if process selected returned false, then best we can do is offer a x,y click :(
if (!found_something) {
var first_available = $("ul#browser_steps li.empty").first();
$('select', first_available).val('Click X,Y').change();
$('input[type=text]', first_available).first().val(last_click_xy['x'] + ',' + last_click_xy['y']);
draw_circle_on_canvas(e.offsetX, e.offsetY);
}
});
$('#browsersteps-selector-canvas').bind('mousemove', function (e) {
if (!xpath_data) {
return;
}
// checkbox if find elements is enabled
ctx.clearRect(0, 0, c.width, c.height);
ctx.fillStyle = 'rgba(255,0,0, 0.1)';
ctx.strokeStyle = 'rgba(255,0,0, 0.9)';
// Add in offset
if ((typeof e.offsetX === "undefined" || typeof e.offsetY === "undefined") || (e.offsetX === 0 && e.offsetY === 0)) {
var targetOffset = $(e.target).offset();
e.offsetX = e.pageX - targetOffset.left;
e.offsetY = e.pageY - targetOffset.top;
}
current_selected_i = false;
// Reverse order - the most specific element should be deeper/"later" in the list
// Basically, find the deepest one at this point
//$('#browsersteps-selector-canvas').css('cursor', 'pointer');
for (var i = xpath_data['size_pos'].length; i !== 0; i--) {
// draw all of them? let them choose somehow?
var sel = xpath_data['size_pos'][i - 1];
// If we are in a bounding-box
if (e.offsetY > sel.top * y_scale && e.offsetY < sel.top * y_scale + sel.height * y_scale
&&
e.offsetX > sel.left * x_scale && e.offsetX < sel.left * x_scale + sel.width * x_scale
) {
// Only highlight these interesting types
if (1) {
ctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
ctx.fillRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
current_selected_i = i - 1;
break;
// find the smallest one at this x,y
// does it mean sort the xpath list by size (w*h) i think so!
} else {
if (include_text_elements[0].checked === true) {
// blue one with background instead?
ctx.fillStyle = 'rgba(0,0,255, 0.1)';
ctx.strokeStyle = 'rgba(0,0,200, 0.7)';
$('#browsersteps-selector-canvas').css('cursor', 'grab');
ctx.strokeRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
ctx.fillRect(sel.left * x_scale, sel.top * y_scale, sel.width * x_scale, sel.height * y_scale);
current_selected_i = i - 1;
break;
}
}
}
}
}.debounce(10));
});
// $("#browser-steps-fieldlist").bind('mouseover', function(e) {
// console.log(e.xpath_data_index);
// });
// callback for clicking on an xpath on the canvas
function process_selected(xpath_data_index) {
found_something = false;
var first_available = $("ul#browser_steps li.empty").first();
if (xpath_data_index !== false) {
// Nothing focused, so fill in a new one
// if input type button or <button>
// from the top, find the next unused one and use it
var x = xpath_data['size_pos'][xpath_data_index];
console.log(x);
if (x && first_available.length) {
// @todo will it let you click elements that have another layer on top? probably not.
if (x['tagtype'] === 'text' || x['tagtype'] === 'email' || x['tagName'] === 'textarea' || x['tagtype'] === 'password' || x['tagtype'] === 'search') {
$('select', first_available).val('Enter text in field').change();
$('input[type=text]', first_available).first().val(x['xpath']);
$('input[placeholder="Value"]', first_available).addClass('ok').click().focus();
found_something = true;
} else {
if (x['isClickable'] || x['tagName'].startsWith('h') || x['tagName'] === 'a' || x['tagName'] === 'button' || x['tagtype'] === 'submit' || x['tagtype'] === 'checkbox' || x['tagtype'] === 'radio' || x['tagtype'] === 'li') {
$('select', first_available).val('Click element').change();
$('input[type=text]', first_available).first().val(x['xpath']);
found_something = true;
}
}
first_available.xpath_data_index = xpath_data_index;
if (!found_something) {
if (include_text_elements[0].checked === true) {
// Suggest that we use as filter?
// @todo filters should always be in the last steps, nothing non-filter after it
found_something = true;
ctx.strokeStyle = 'rgba(0,0,255, 0.9)';
ctx.fillStyle = 'rgba(0,0,255, 0.1)';
$('select', first_available).val('Extract text and use as filter').change();
$('input[type=text]', first_available).first().val(x['xpath']);
include_text_elements[0].checked = false;
}
}
}
}
}
function draw_circle_on_canvas(x, y) {
ctx.beginPath();
ctx.arc(x, y, 8, 0, 2 * Math.PI, false);
ctx.fillStyle = 'rgba(255,0,0, 0.6)';
ctx.fill();
}
function start() {
console.log("Starting browser-steps UI");
browsersteps_session_id = Date.now();
// @todo Setting the first step should be done at the data layer, but wtforms doesn't want to play nicely
$('#browser_steps >li:first-child').removeClass('empty');
set_first_gotosite_disabled();
$('#browser-steps-ui .loader .spinner').show();
$('.clear,.remove', $('#browser_steps >li:first-child')).hide();
$.ajax({
type: "GET",
url: browser_steps_sync_url + "&browsersteps_session_id=" + browsersteps_session_id,
statusCode: {
400: function () {
// More than likely the CSRF token was lost when the server restarted
alert("There was a problem processing the request, please reload the page.");
}
}
}).done(function (data) {
xpath_data = data.xpath_data;
$("#loading-status-text").fadeIn();
// This should trigger 'Goto site'
console.log("Got startup response, requesting Goto-Site (first) step fake click");
$('#browser_steps >li:first-child .apply').click();
browserless_seconds_remaining = data.browser_time_remaining;
set_first_gotosite_disabled();
}).fail(function (data) {
console.log(data);
alert('There was an error communicating with the server.');
});
}
function disable_browsersteps_ui() {
set_first_gotosite_disabled();
$("#browser-steps-ui").css('opacity', '0.3');
$('#browsersteps-selector-canvas').off("mousemove mousedown click");
}
////////////////////////// STEPS UI ////////////////////
$('ul#browser_steps [type="text"]').keydown(function (e) {
if (e.keyCode === 13) {
// hitting [enter] in a browser-step input should trigger the 'Apply'
e.preventDefault();
$(".apply", $(this).closest('li')).click();
return false;
}
});
// Look up which step was selected, and enable or disable the related extra fields
// So that people using it don't get confused
$('ul#browser_steps select').on("change", function () {
var config = browser_steps_config[$(this).val()].split(' ');
var elem_selector = $('tr:nth-child(2) input', $(this).closest('tbody'));
var elem_value = $('tr:nth-child(3) input', $(this).closest('tbody'));
if (config[0] == 0) {
$(elem_selector).fadeOut();
} else {
$(elem_selector).fadeIn();
}
if (config[1] == 0) {
$(elem_value).fadeOut();
} else {
$(elem_value).fadeIn();
}
if ($(this).val() === 'Click X,Y' && last_click_xy['x'] > 0 && $(elem_value).val().length === 0) {
// @todo handle scale
$(elem_value).val(last_click_xy['x'] + ',' + last_click_xy['y']);
}
}).change();
function set_greyed_state() {
$('ul#browser_steps select').not('option:selected[value="Choose one"]').closest('li').removeClass('empty');
$('ul#browser_steps select option:selected[value="Choose one"]').closest('li').addClass('empty');
}
// Add the extra buttons to the steps
$('ul#browser_steps li').each(function (i) {
var s = '<div class="control">' + '<a data-step-index=' + i + ' class="pure-button button-secondary button-green button-xsmall apply" >Apply</a>&nbsp;';
if (i > 0) {
// The first step never gets these (Goto-site)
s += '<a data-step-index=' + i + ' class="pure-button button-secondary button-xsmall clear" >Clear</a>&nbsp;' +
'<a data-step-index=' + i + ' class="pure-button button-secondary button-red button-xsmall remove" >Remove</a>';
}
s += '</div>';
$(this).append(s)
}
);
$('ul#browser_steps li .control .clear').click(function (element) {
$("select", $(this).closest('li')).val("Choose one").change();
$(":text", $(this).closest('li')).val('');
});
$('ul#browser_steps li .control .remove').click(function (element) {
// e.g. removing the 2nd step (the 3rd slot when counting 0,1,2,...)
var p = $("#browser_steps li").index($(this).closest('li'));
var elem_to_remove = $("#browser_steps li")[p];
$('.clear', elem_to_remove).click();
$("#browser_steps li").slice(p, 10).each(function (index) {
// get the next one's value from where we clicked
var next = $("#browser_steps li")[p + index + 1];
if (next) {
// and set THIS ones value from the next one
var n = $('input', next);
$("select", $(this)).val($('select', next).val());
$('input', this)[0].value = $(n)[0].value;
$('input', this)[1].value = $(n)[1].value;
// Triggers reconfiguring the field based on the system config
$("select", $(this)).change();
}
});
// Reset their hidden/empty states
set_greyed_state();
});
$('ul#browser_steps li .control .apply').click(function (event) {
// sequential requests @todo refactor
if (apply_buttons_disabled) {
return;
}
var current_data = $(event.currentTarget).closest('li');
$('#browser-steps-ui .loader .spinner').fadeIn();
apply_buttons_disabled = true;
$('ul#browser_steps li .control .apply').css('opacity', 0.5);
$("#browsersteps-img").css('opacity', 0.65);
var is_last_step = 0;
var step_n = $(event.currentTarget).data('step-index');
// On the last step, we should also be getting data ready for the visual selector
$('ul#browser_steps li select').each(function (i) {
if ($(this).val() !== 'Choose one') {
is_last_step += 1;
}
});
if (is_last_step == (step_n + 1)) {
is_last_step = true;
} else {
is_last_step = false;
}
console.log("Requesting step via POST " + $("select[id$='operation']", current_data).first().val());
// POST the currently clicked step form widget back and await response, redraw
$.ajax({
method: "POST",
url: browser_steps_sync_url + "&browsersteps_session_id=" + browsersteps_session_id,
data: {
'operation': $("select[id$='operation']", current_data).first().val(),
'selector': $("input[id$='selector']", current_data).first().val(),
'optional_value': $("input[id$='optional_value']", current_data).first().val(),
'step_n': step_n,
'is_last_step': is_last_step
},
statusCode: {
400: function () {
// More than likely the CSRF token was lost when the server restarted
alert("There was a problem processing the request, please reload the page.");
$("#loading-status-text").hide();
$('#browser-steps-ui .loader .spinner').fadeOut();
},
401: function (data) {
// More than likely the CSRF token was lost when the server restarted
alert(data.responseText);
$("#loading-status-text").hide();
$('#browser-steps-ui .loader .spinner').fadeOut();
}
}
}).done(function (data) {
// it should return the new state (selectors available and screenshot)
xpath_data = data.xpath_data;
$('#browsersteps-img').attr('src', data.screenshot);
$('#browser-steps-ui .loader .spinner').fadeOut();
apply_buttons_disabled = false;
$("#browsersteps-img").css('opacity', 1);
$('ul#browser_steps li .control .apply').css('opacity', 1);
browserless_seconds_remaining = data.browser_time_remaining;
$("#loading-status-text").hide();
set_first_gotosite_disabled();
}).fail(function (data) {
console.log(data);
if (data.responseText.includes("Browser session expired")) {
disable_browsersteps_ui();
}
apply_buttons_disabled = false;
$("#loading-status-text").hide();
$('ul#browser_steps li .control .apply').css('opacity', 1);
$("#browsersteps-img").css('opacity', 1);
});
});
$("ul#browser_steps select").change(function () {
set_greyed_state();
}).change();
});
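
For reference, a minimal sketch of the coordinate mapping the canvas handlers above depend on: the element rectangles in xpath_data are in real page pixels, while mouse events arrive in (usually smaller) screenshot/canvas pixels, so x_scale and y_scale convert in both directions. The numbers below are purely illustrative and not taken from a real session.

// Hypothetical values for illustration only
var browser_width = 1280;      // width the page was rendered at (xpath_data['browser_width'])
var displayed_width = 640;     // on-screen width of the browsersteps-img screenshot
var x_scale = displayed_width / browser_width;   // 0.5: page px -> canvas px

// An element reported at left=100 page px is drawn at 100 * x_scale = 50 canvas px,
// and a click at offsetX=50 on the canvas maps back to (1 / x_scale) * 50 = 100 page px,
// which is what last_click_xy stores for a 'Click X,Y' step.
var click_canvas_x = 50;
var click_page_x = Math.round((1 / x_scale) * click_canvas_x);   // 100
console.log(click_page_x);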


@@ -0,0 +1,25 @@
$(document).ready(function () {
// Load it when the #screenshot tab is in use, so we don't give a slow experience while waiting for the text diff to load
window.addEventListener('hashchange', function (e) {
toggle(location.hash);
}, false);
toggle(location.hash);
function toggle(hash_name) {
if (hash_name === '#screenshot') {
$("img#screenshot-img").attr('src', screenshot_url);
$("#settings").hide();
} else if (hash_name === '#error-screenshot') {
$("img#error-screenshot-img").attr('src', error_screenshot_url);
$("#settings").hide();
} else if (hash_name === '#extract') {
$("#settings").hide();
}
else {
$("#settings").show();
}
}
});


@@ -0,0 +1,110 @@
var a = document.getElementById("a");
var b = document.getElementById("b");
var result = document.getElementById("result");
function changed() {
// https://github.com/kpdecker/jsdiff/issues/389
// I would love to use `{ignoreWhitespace: true}` here but it breaks the formatting
var options = {
ignoreWhitespace: document.getElementById("ignoreWhitespace").checked,
};
var diff = Diff[window.diffType](a.textContent, b.textContent, options);
var fragment = document.createDocumentFragment();
for (var i = 0; i < diff.length; i++) {
if (diff[i].added && diff[i + 1] && diff[i + 1].removed) {
var swap = diff[i];
diff[i] = diff[i + 1];
diff[i + 1] = swap;
}
var node;
if (diff[i].removed) {
node = document.createElement("del");
node.classList.add("change");
const wrapper = node.appendChild(document.createElement("span"));
wrapper.appendChild(document.createTextNode(diff[i].value));
} else if (diff[i].added) {
node = document.createElement("ins");
node.classList.add("change");
const wrapper = node.appendChild(document.createElement("span"));
wrapper.appendChild(document.createTextNode(diff[i].value));
} else {
node = document.createTextNode(diff[i].value);
}
fragment.appendChild(node);
}
result.textContent = "";
result.appendChild(fragment);
// Jump at start
inputs.current = 0;
next_diff();
}
window.onload = function () {
/* Convert the <option> values in the version selector from UTC time.time() timestamps to local browser time */
var diffList = document.getElementById("diff-version");
if (typeof diffList != "undefined" && diffList != null) {
for (var option of diffList.options) {
var dateObject = new Date(option.value * 1000);
option.label = dateObject.toLocaleString();
}
}
/* Set current version date as local time in the browser also */
var current_v = document.getElementById("current-v-date");
var dateObject = new Date(newest_version_timestamp * 1000);
current_v.innerHTML = dateObject.toLocaleString();
onDiffTypeChange(
document.querySelector('#settings [name="diff_type"]:checked'),
);
changed();
};
a.onpaste = a.onchange = b.onpaste = b.onchange = changed;
if ("oninput" in a) {
a.oninput = b.oninput = changed;
} else {
a.onkeyup = b.onkeyup = changed;
}
function onDiffTypeChange(radio) {
window.diffType = radio.value;
// Not necessary
// document.title = "Diff " + radio.value.slice(4);
}
var radio = document.getElementsByName("diff_type");
for (var i = 0; i < radio.length; i++) {
radio[i].onchange = function (e) {
onDiffTypeChange(e.target);
changed();
};
}
document.getElementById("ignoreWhitespace").onchange = function (e) {
changed();
};
var inputs = document.getElementsByClassName("change");
inputs.current = 0;
function next_diff() {
var element = inputs[inputs.current];
var headerOffset = 80;
var elementPosition = element.getBoundingClientRect().top;
var offsetPosition = elementPosition - headerOffset + window.scrollY;
window.scrollTo({
top: offsetPosition,
behavior: "smooth",
});
inputs.current++;
if (inputs.current >= inputs.length) {
inputs.current = 0;
}
}
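
The loop in changed() above walks the change objects produced by the jsdiff library (window.Diff, per the github.com/kpdecker/jsdiff link in the comment). Each entry carries a value string plus optional added/removed flags; a hedged sketch of that shape, with made-up strings:

// Illustrative only - roughly what Diff.diffWords("price: 10", "price: 12") returns
var example_diff = [
{value: "price: "},               // unchanged text becomes a plain text node
{value: "10", removed: true},     // wrapped as <del class="change"><span>10</span></del>
{value: "12", added: true}        // wrapped as <ins class="change"><span>12</span></ins>
];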

38
changedetectionio/static/js/diff.min.js vendored Normal file

File diff suppressed because one or more lines are too long


@@ -0,0 +1,36 @@
$(document).ready(function () {
function toggle() {
if ($('input[name="application-fetch_backend"]:checked').val() != 'html_requests') {
$('#requests-override-options').hide();
$('#webdriver-override-options').show();
} else {
$('#requests-override-options').show();
$('#webdriver-override-options').hide();
}
}
$('input[name="application-fetch_backend"]').click(function (e) {
toggle();
});
toggle();
$("#api-key").hover(
function () {
$("#api-key-copy").html('copy').fadeIn();
},
function () {
$("#api-key-copy").hide();
}
).click(function (e) {
$("#api-key-copy").html('copied');
var range = document.createRange();
var n = $("#api-key")[0];
range.selectNode(n);
window.getSelection().removeAllRanges();
window.getSelection().addRange(range);
document.execCommand("copy");
window.getSelection().removeAllRanges();
});
});

File diff suppressed because one or more lines are too long


@@ -0,0 +1,56 @@
/**
* debounce
* @param {integer} milliseconds This param indicates the number of milliseconds
* to wait after the last call before calling the original function.
* @param {object} context What "this" refers to in the returned function.
* @return {function} This returns a function that when called will wait the
* indicated number of milliseconds after the last call before
* calling the original function.
*/
Function.prototype.debounce = function (milliseconds, context) {
var baseFunction = this,
timer = null,
wait = milliseconds;
return function () {
var self = context || this,
args = arguments;
function complete() {
baseFunction.apply(self, args);
timer = null;
}
if (timer) {
clearTimeout(timer);
}
timer = setTimeout(complete, wait);
};
};
/**
* throttle
* @param {integer} milliseconds This param indicates the number of milliseconds
* to wait between calls before calling the original function.
* @param {object} context What "this" refers to in the returned function.
* @return {function} This returns a function that when called will wait the
* indicated number of milliseconds between calls before
* calling the original function.
*/
Function.prototype.throttle = function (milliseconds, context) {
var baseFunction = this,
lastEventTimestamp = null,
limit = milliseconds;
return function () {
var self = context || this,
args = arguments,
now = Date.now();
if (!lastEventTimestamp || now - lastEventTimestamp >= limit) {
lastEventTimestamp = now;
baseFunction.apply(self, args);
}
};
};
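
These prototype helpers are what allow the mousemove handler in browser-steps.js above to be written as function (e) {...}.debounce(10). Two hedged usage sketches follow; the handler bodies are hypothetical and not part of this changeset:

// Redraw at most 10ms after the last mousemove event (as browser-steps.js does)
$('#browsersteps-selector-canvas').on('mousemove', function (e) {
console.log('mouse at', e.offsetX, e.offsetY);   // hypothetical handler body
}.debounce(10));

// Run an expensive resize handler at most once every 250ms (hypothetical use)
$(window).on('resize', function () {
set_scale();
}.throttle(250));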


@@ -0,0 +1,59 @@
$(document).ready(function() {
$('#add-email-helper').click(function (e) {
e.preventDefault();
var email = prompt("Destination email");
if (email) {
var n = $(".notification-urls");
$(n).val($.trim($(n).val()) + "\n" + email_notification_prefix + email);
}
});
$('#send-test-notification').click(function (e) {
e.preventDefault();
// this can be global
var csrftoken = $('input[name=csrf_token]').val();
$.ajaxSetup({
beforeSend: function(xhr, settings) {
if (!/^(GET|HEAD|OPTIONS|TRACE)$/i.test(settings.type) && !this.crossDomain) {
xhr.setRequestHeader("X-CSRFToken", csrftoken)
}
}
})
var data = {
window_url: window.location.href,
notification_urls: $('.notification-urls').val(),
notification_title: $('.notification-title').val(),
notification_body: $('.notification-body').val(),
notification_format: $('.notification-format').val(),
}
for (var key in data) {
if (!data[key].length) {
alert(key + " is empty, cannot send test.")
return;
}
}
$.ajax({
type: "POST",
url: notification_base_url,
data : data,
statusCode: {
400: function() {
// More than likely the CSRF token was lost when the server restarted
alert("There was a problem processing the request, please reload the page.");
}
}
}).done(function(data){
console.log(data);
alert('Sent');
}).fail(function(data){
console.log(data);
alert('There was an error communicating with the server.');
})
});
});

Some files were not shown because too many files have changed in this diff.