changedetection.io/changedetectionio/diff/tokenizers/natural_text.py
dgtlmoon a748a43224
"History" page - Use faster server side "difference" rendering, show ignored/triggered rows (#3442)
2025-12-15 15:39:07 +01:00


"""
Simple word tokenizer using whitespace boundaries.
This is a simpler tokenizer that treats all whitespace as token boundaries
without special handling for HTML tags or other markup.
"""
from typing import List


def tokenize_words(text: str) -> List[str]:
"""
Split text into words using simple whitespace boundaries.
This is a simpler tokenizer that treats all whitespace as token boundaries
without special handling for HTML tags.
Args:
text: Input text to tokenize
Returns:
List of tokens (words and whitespace)
Examples:
>>> tokenize_words("Hello world")
['Hello', ' ', 'world']
>>> tokenize_words("one two")
['one', ' ', ' ', 'two']
"""
    tokens = []
    current = ''
    for char in text:
        if char.isspace():
            # Flush any in-progress word before emitting the whitespace.
            if current:
                tokens.append(current)
                current = ''
            # Each whitespace character becomes its own token, so runs of
            # spaces survive round-tripping via ''.join(tokens).
            tokens.append(char)
        else:
            current += char
    # Flush a trailing word with no whitespace after it.
    if current:
        tokens.append(current)
    return tokens
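
Because this tokenizer emits both words and the individual whitespace characters, joining any slice of tokens reproduces the original text exactly, which is what makes it usable as input to a sequence differ. The sketch below is illustrative only: it pairs tokenize_words with the standard library's difflib.SequenceMatcher to build a word-level diff. Whether changedetection.io's diff renderer consumes the tokens this way is an assumption, and word_diff is a hypothetical helper, not part of the codebase; the import path simply follows the file location shown above.

# Illustrative sketch, not part of changedetection.io: word-level diffing by
# feeding whitespace-preserving tokens into difflib.SequenceMatcher.
from difflib import SequenceMatcher
from typing import List, Tuple

from changedetectionio.diff.tokenizers.natural_text import tokenize_words


def word_diff(before: str, after: str) -> List[Tuple[str, str]]:
    """Return (opcode, text) pairs describing word-level changes."""
    a = tokenize_words(before)
    b = tokenize_words(after)
    out = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, a, b).get_opcodes():
        if tag == 'replace':
            # A replacement is reported as a deletion followed by an insertion.
            out.append(('delete', ''.join(a[i1:i2])))
            out.append(('insert', ''.join(b[j1:j2])))
        elif tag == 'delete':
            out.append(('delete', ''.join(a[i1:i2])))
        elif tag == 'insert':
            out.append(('insert', ''.join(b[j1:j2])))
        else:  # 'equal'
            out.append(('equal', ''.join(a[i1:i2])))
    return out


if __name__ == '__main__':
    print(word_diff("the quick fox", "the slow fox"))
    # [('equal', 'the '), ('delete', 'quick'), ('insert', 'slow'), ('equal', ' fox')]

Since whitespace tokens are preserved verbatim, ''.join() over the matched slices round-trips the input, so a diff rendered from these opcodes never loses or normalizes spacing.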