This is a static dump of issues from the old "Flyspray" bug tracker for DokuWiki. Bugs and feature requests
are now tracked in the issue tracker on GitHub.
Closed
Fixed
FS#2049 Do not interpret Soft hyphens as word boundary in indexer
UTF-8/Unicode
2010-10-06 adrianlang
Currently, the soft hyphen character [1] is interpreted as a word boundary by idx_tokenizer. This happens in the utf8_stripspecials call, where special characters are replaced by whitespace. We should completely remove the soft hyphen instead.
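The difference between the two behaviours can be sketched as follows. This is a minimal illustration in Python, not DokuWiki's actual PHP code; the function names are hypothetical, and only the treatment of U+00AD (SOFT HYPHEN) mirrors the issue described above.

```python
# U+00AD SOFT HYPHEN: a hint for optional hyphenation, not a real word break.
SOFT_HYPHEN = "\u00ad"

def tokenize_replacing(text: str) -> list[str]:
    # Old behaviour (as in utf8_stripspecials): the soft hyphen is
    # replaced by whitespace like other special chars, splitting the word.
    return text.replace(SOFT_HYPHEN, " ").split()

def tokenize_removing(text: str) -> list[str]:
    # Proposed behaviour: strip the soft hyphen entirely,
    # so the surrounding word is indexed intact.
    return text.replace(SOFT_HYPHEN, "").split()

text = "Doku\u00adWiki indexer"
print(tokenize_replacing(text))  # ['Doku', 'Wiki', 'indexer']
print(tokenize_removing(text))   # ['DokuWiki', 'indexer']
```

With the old behaviour a search for "DokuWiki" would miss the hyphenated occurrence, because only the fragments "Doku" and "Wiki" ever reach the index.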
4f0030dd should fix this. However, search highlighting does not work for such words, and I doubt it is possible. Unless someone has an idea, I'll close this one as fixed soon.