mirror of
https://github.com/dgtlmoon/changedetection.io.git
synced 2026-01-31 11:26:01 +00:00
Multi-language / Translations Support (#3696)
- Complete internationalization system implemented
- Support for 7 languages: Czech (cs), German (de), French (fr), Italian (it), Korean (ko), Chinese Simplified (zh), Chinese Traditional (zh_TW)
- Language selector with localized flags and theming
- Flash message translations
- Multiple translation fixes and improvements across all languages
- Language setting preserved across redirects

Pluggable Content Fetchers (#3653)
- New architecture for an extensible content fetcher system
- Allows custom fetcher implementations

Image / Screenshot Comparison Processor (#3680)
- New processor for visual change detection (disabled for this release)
- Supporting CSS/JS infrastructure added

UI Improvements

Design & Layout
- Auto-generated tag color schemes
- Simplified login form styling
- Removed hard-coded CSS, moved to SCSS variables
- Tag UI cleanup and improvements
- Automatic tab wrapper functionality
- Menu refactoring for better organization
- Cleanup of offset settings
- Hide sticky tabs on narrow viewports
- Improved responsive layout (#3702)

User Experience
- Modal alerts/confirmations on delete/clear operations (#3693, #3598, #3382)
- Auto-add https:// to URLs in the quickwatch form if not present
- Better redirect handling on login (#3699)
- 'Recheck all' now returns to the correct group/tag (#3673)
- Language-set redirect keeps the hash fragment
- More friendly, human-readable text throughout the UI

Performance & Reliability

Scheduler & Processing
- Soft delays instead of blocking time.sleep() calls (#3710)
- More resilient handling of the same UUID being processed (#3700)
- Better Puppeteer timeout handling
- Improved Puppeteer shutdown/cleanup (#3692)
- Requests cleanup now properly async

History & Rendering
- Faster server-side "difference" rendering on the History page (#3442)
- Show ignored/triggered rows in history
- API: retry watch data if the watch dict changed (more reliable)

API Improvements
- Watch get endpoint: retry mechanism for changed watch data
- WatchHistoryDiff API endpoint includes extra format args (#3703)

Testing Improvements
- Replace time.sleep with wait_for_notification_endpoint_output (#3716)
- Test for mode switching (#3701)
- Test for #3720 added (#3725)
- Extract-text difference test fixes
- Improved dev workflow

Bug Fixes
- Notification error text output (#3672, #3669, #3280)
- HTML validation fixes (#3704)
- Template discovery path fixes
- Notification debug log now uses the system locale for dates/times
- Puppeteer spelling mistake in log output
- Recalculation on anchor change
- Queue bubble update disabled temporarily

Dependency Updates
- beautifulsoup4 updated (#3724)
- psutil 7.1.0 → 7.2.1 (#3723)
- python-engineio ~=4.12.3 → ~=4.13.0 (#3707)
- python-socketio ~=5.14.3 → ~=5.16.0 (#3706)
- flask-socketio ~=5.5.1 → ~=5.6.0 (#3691)
- brotli ~=1.1 → ~=1.2 (#3687)
- lxml updated (#3590)
- pytest ~=7.2 → ~=9.0 (#3676)
- jsonschema ~=4.0 → ~=4.25 (#3618)
- pluggy ~=1.5 → ~=1.6 (#3616)
- cryptography 44.0.1 → 46.0.3 (security) (#3589)

Documentation
- README updated with viewport size setup information

Development Infrastructure
- Dev container only built on the dev branch
- Improved dev workflow tooling
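One scheduler item above, replacing blocking time.sleep() calls with soft delays (#3710), follows a general pattern: wait on an event with a timeout so a worker can be woken early (e.g. for shutdown) instead of sleeping uninterruptibly. The sketch below only illustrates that pattern; it is not the project's actual code, and the names `shutdown_event` and `soft_delay` are hypothetical.

```python
import threading

# Hypothetical names for illustration; not changedetection.io's actual code.
shutdown_event = threading.Event()


def soft_delay(seconds: float) -> bool:
    """Wait up to `seconds`, but wake early if shutdown is requested.

    Returns True if shutdown was signalled, False if the full delay elapsed.
    """
    # Event.wait() returns True as soon as the event is set, or False
    # after the timeout expires -- unlike time.sleep(), which always
    # blocks for the full duration regardless of shutdown.
    return shutdown_event.wait(timeout=seconds)


def worker() -> None:
    # A worker loop built on soft delays can exit promptly on shutdown.
    while not shutdown_event.is_set():
        # ... do one unit of work here ...
        if soft_delay(0.01):
            break  # woken early: stop without waiting out the delay
```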
62 lines
1.7 KiB
Python
"""
Tokenizer that preserves HTML tags as atomic units while splitting on whitespace.

This tokenizer is specifically designed for HTML content where:

- HTML tags should remain intact (e.g., '<p>', '<a href="...">')
- Whitespace tokens are preserved for accurate diff reconstruction
- Words are split on whitespace boundaries
"""

from typing import List


def tokenize_words_and_html(text: str) -> List[str]:
    """
    Split text into words and boundaries (spaces, HTML tags).

    This tokenizer preserves HTML tags as atomic units while splitting on
    whitespace. Useful for content that contains HTML markup.

    Args:
        text: Input text to tokenize

    Returns:
        List of tokens (words, spaces, HTML tags)

    Examples:
        >>> tokenize_words_and_html("<p>Hello world</p>")
        ['<p>', 'Hello', ' ', 'world', '</p>']
        >>> tokenize_words_and_html("<a href='test.com'>link</a>")
        ["<a href='test.com'>", 'link', '</a>']
    """
    tokens = []
    current = ''
    in_tag = False

    for char in text:
        if char == '<':
            # Start of HTML tag: flush any pending word first
            if current:
                tokens.append(current)
            current = '<'
            in_tag = True
        elif char == '>' and in_tag:
            # End of HTML tag: emit the whole tag as one token
            current += '>'
            tokens.append(current)
            current = ''
            in_tag = False
        elif char.isspace() and not in_tag:
            # Space outside of a tag becomes its own token
            if current:
                tokens.append(current)
                current = ''
            tokens.append(char)
        else:
            current += char

    if current:
        tokens.append(current)
    return tokens
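Because every whitespace character outside a tag is emitted as its own token, `''.join(tokens)` reproduces the input exactly, which is the property that makes this tokenizer suitable for diff reconstruction. A small self-contained usage sketch (the function body is repeated from above so the snippet runs on its own):

```python
from typing import List


def tokenize_words_and_html(text: str) -> List[str]:
    # Repeated from the module above so this example is runnable standalone.
    tokens = []
    current = ''
    in_tag = False
    for char in text:
        if char == '<':
            if current:
                tokens.append(current)
            current = '<'
            in_tag = True
        elif char == '>' and in_tag:
            current += '>'
            tokens.append(current)
            current = ''
            in_tag = False
        elif char.isspace() and not in_tag:
            if current:
                tokens.append(current)
                current = ''
            tokens.append(char)
        else:
            current += char
    if current:
        tokens.append(current)
    return tokens


html = '<p class="intro">Hello world <b>again</b></p>'
tokens = tokenize_words_and_html(html)

# Tags stay atomic even when their attributes contain spaces:
# ['<p class="intro">', 'Hello', ' ', 'world', ' ', '<b>', 'again', '</b>', '</p>']
print(tokens)

# Round-trip property: concatenating the tokens restores the input exactly.
assert ''.join(tokens) == html
```

Note that spaces inside a tag (such as between attributes) do not split the tag, because the `isspace()` branch only fires when `in_tag` is false.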