Aug-31-2021, 08:55 AM
Skaperen Wrote:so i am wondering what tokenize.tokenize() put in there that is non-ASCII enough to get up to Unicodes that are multi-byte UTF-8.

I think I would first investigate this point thoroughly to determine exactly what happened. This is a case where the error can be fully exposed and understood, so why not do this?
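For instance, you can iterate over the tokens and flag any whose text contains characters above the ASCII range. This is only a sketch with a made-up one-line source; substitute the actual file that triggered the problem:

```python
import io
import tokenize

# Hypothetical sample input: a string literal containing a non-ASCII
# character ('é', encoded as the two UTF-8 bytes 0xC3 0xA9).
src = b"x = 'caf\xc3\xa9'\n"

# tokenize.tokenize() wants a readline callable that returns bytes.
for tok in tokenize.tokenize(io.BytesIO(src).readline):
    # Collect any characters in the token's text outside ASCII.
    non_ascii = [ch for ch in tok.string if ord(ch) > 127]
    if non_ascii:
        print(tokenize.tok_name[tok.type], repr(tok.string), non_ascii)
```

Running that on your real source should show exactly which token(s) carry the multi-byte characters, rather than guessing at what tokenize put there.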