Nov-07-2019, 07:48 PM
(Nov-05-2019, 10:04 AM)karlito Wrote: I thought I had everything right until I ended up with a different file to read. I've been struggling with it for two days. Help, please.
link to the data: file.zip
Thanks for your time.
from os import listdir
from os.path import isfile, join

import pandas as pd

# Location of all files
file_folder = 'path_on_your_computer'

# Save the files into a list (when more than 2)
list_raw_files = [f for f in listdir(file_folder) if isfile(join(file_folder, f))]

# Load the right/given file
for raw_file in list_raw_files:
    # Check the file
    if raw_file.startswith('130'):
        #print (raw_file)
        temp_list = []
        for chunk in pd.read_csv(join(file_folder, raw_file), sep=';', header=None,
                                 chunksize=20000, error_bad_lines=False,
                                 low_memory=False):
            temp_list.append(chunk)
        data = pd.concat(temp_list, axis=0)
        del temp_list

data.head(30)

I have this Error/Warning:
IOPub data rate exceeded.
The notebook server will temporarily stop sending output
to the client in order to avoid crashing it.
To change this limit, set the config variable
--NotebookApp.iopub_data_rate_limit
.
Current values:
NotebookApp.iopub_data_rate_limit=1000000.0 (bytes/sec)
NotebookApp.rate_limit_window=3.0 (secs)
and the data is not complete!
link to my result: output
Try starting the notebook with jupyter notebook --NotebookApp.iopub_data_rate_limit=1.0e10, or try adding time.sleep(1) inside the for loop, as in the sketch below.
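For the second suggestion, here is a minimal sketch of where the sleep could go. It reuses the same read_csv arguments as the snippet above; the folder path and file name are only placeholders:

import time
from os.path import join

import pandas as pd

file_folder = 'path_on_your_computer'   # placeholder, same as in the snippet above
raw_file = '130_example.csv'            # hypothetical name matching startswith('130')

temp_list = []
for chunk in pd.read_csv(join(file_folder, raw_file), sep=';', header=None,
                         chunksize=20000, error_bad_lines=False, low_memory=False):
    temp_list.append(chunk)
    # pause between chunks so any per-chunk output stays under the IOPub rate limit
    time.sleep(1)

data = pd.concat(temp_list, axis=0)

The limit can also be raised persistently by setting c.NotebookApp.iopub_data_rate_limit = 1.0e10 in jupyter_notebook_config.py instead of passing it on the command line.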