Nov-06-2019, 09:25 AM
UnicodeDecodeError
occurs if the source file can't be decoded as UTF-8, which is the default encoding. The function pd.read_csv does not seem to have a kwarg to ignore encoding errors.
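To see why this error happens, here is a minimal sketch (the sample string is made up): an 'é' encoded as Latin-1 is a single byte that is not a valid UTF-8 sequence.

```python
# 'é' encoded as Latin-1 is the single byte 0xE9 -- not valid UTF-8.
raw = "café".encode("latin1")

try:
    raw.decode("utf-8")
except UnicodeDecodeError as exc:
    print("UTF-8 decoding failed:", exc.reason)

# Decoding with the matching codec works,
# and errors='ignore' silently drops the offending byte instead:
print(raw.decode("latin1"))                   # café
print(raw.decode("utf-8", errors="ignore"))   # caf
```

Note that errors='ignore' loses data (the é is simply gone), which is why knowing the real encoding is preferable.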
One way could be to open the file in text mode and pass the file object to pandas.

with open("G:\\Analyser\\2019 OS\\test.csv", errors='ignore') as fd:
    data = pd.read_csv(fd, header=None, error_bad_lines=False)

Take a look into the documentation of pd.read_csv.
This is the signature:

pandas.read_csv(filepath_or_buffer, sep=', ', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=None, error_bad_lines=True, warn_bad_lines=True, skipfooter=0, doublequote=True, delim_whitespace=False, low_memory=True, memory_map=False, float_precision=None)

The first argument
filepath_or_buffer
is described as:

Quote:
filepath_or_buffer : str, pathlib.Path, py._path.local.LocalPath or any object with a read() method (such as a file handle or StringIO)
The string could be a URL. Valid URL schemes include http, ftp, s3, and file. For file URLs, a host is expected. For instance, a local file could be file://localhost/path/to/table.csv
I haven't tested the example above, but it should work. In this case, encoding errors are ignored.
I guess the file you have is in a different encoding than UTF-8.
It could be:
- latin1 (ISO/IEC 8859-1)
- latin9 (ISO/IEC 8859-15)
- Windows-1252 (CP 1252 / (Western European) / ANSI)
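If you can guess one of these, you can also pass it directly via the encoding kwarg visible in the signature above, which decodes the file correctly instead of throwing bytes away. A small sketch with an in-memory file (the sample data is made up):

```python
import io
import pandas as pd

# Sample data encoded as Latin-1; the 'é'/'ö' bytes are invalid UTF-8,
# so reading this without an encoding hint would raise UnicodeDecodeError.
raw = "name;city\nRené;Köln\n".encode("latin1")

# Passing the real encoding decodes everything correctly:
df = pd.read_csv(io.BytesIO(raw), sep=";", encoding="latin1")
print(df.loc[0, "name"])  # René
```

For a file on disk the same works with pd.read_csv("test.csv", encoding="latin1").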
There is also a module called
ftfy
which can fix bad encoding errors.

import ftfy

with open('file_with_bad_encoding.txt', errors='ignore') as src:
    fixed_text = ftfy.fix_text(src.read())

with open('file_with_fixed_encoding.txt', 'w') as dst:
    dst.write(fixed_text)

After this, the file uses UTF-8 as encoding and most errors from wrong encoding/decoding should be fixed.
It is better to know the right encoding of the input file.
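If you don't know it, a crude trial-and-error sketch over the candidates listed above can narrow it down (the helper name guess_encoding is made up; note that latin1 and latin9 can decode any byte, so a hit there is only a fallback, not proof):

```python
def guess_encoding(raw: bytes,
                   candidates=("utf-8", "cp1252", "latin9", "latin1")):
    """Return the first candidate codec that decodes raw without errors.

    Order matters: the 8-bit codecs at the end accept almost any byte
    sequence, so a match there means 'no decode error', not certainty.
    """
    for encoding in candidates:
        try:
            raw.decode(encoding)
        except UnicodeDecodeError:
            continue
        return encoding
    return None

print(guess_encoding("café".encode("utf-8")))   # utf-8
print(guess_encoding("café".encode("latin1")))  # cp1252 (plausible guess)
```

For serious detection work a statistical detector is more reliable than this byte-validity check, since many byte sequences are valid in several encodings at once.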
Almost dead, but too lazy to die: https://sourceserver.info
All humans together. We don't need politicians!