Hello to all,
Happy new year and good health!!
I am sure this is super simple, but I can't find the right way to do it.
I have a folder with some csv files.
All the CSV files have the same structure and the same columns, with similar data, but the number of rows differs from file to file.
I want to combine all that data into one dataframe so I can write it out as a single CSV.
I have the following code.
import pandas as pd
import glob

path = r"path\to\my\folder\*.csv"
df2 = pd.DataFrame(columns=['Name', 'Reading', 'Office', 'Phone', 'address'])
csvfiles = []
a = 0
for file in glob.glob(path):
    csvfiles.append(file)
    # print(file)

for csvnub in csvfiles:
    df = pd.read_csv(csvnub)
    count_row = df.shape[0]
    print(count_row)
    df2.iloc[a] = df.iloc[count_row]
    a = a + 1

print(df2)
df2.to_csv("InfosTotal.csv", index=True, encoding="utf_8_sig")

I tried append()...
for csvnub in csvfiles:
    df = pd.read_csv(csvnub)
    count_row = df.shape[0]
    print(count_row)
    for row in count_row:
        df2.append(row)

or

df2.append(df, ignore_index=False)

I tried concat() also...
Help will be extremely appreciated.
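
For what it's worth, here is a minimal sketch of the usual pattern for this task: read every matching file with `glob`, then stack the resulting dataframes row-wise with `pd.concat` (note that `DataFrame.append` was removed in recent pandas versions, so `concat` is the way to go). The folder path and output file name below are placeholders; the example builds two tiny CSVs in a temporary directory just so it runs on its own.

```python
# Sketch: combine same-structure CSV files into one DataFrame with
# glob + pd.concat. The temp folder stands in for your real folder.
import glob
import os
import tempfile

import pandas as pd

# Create two small example CSVs (stand-ins for your real files).
tmpdir = tempfile.mkdtemp()
pd.DataFrame({"Name": ["Ann"], "Phone": ["555-1"]}).to_csv(
    os.path.join(tmpdir, "a.csv"), index=False)
pd.DataFrame({"Name": ["Bob", "Cam"], "Phone": ["555-2", "555-3"]}).to_csv(
    os.path.join(tmpdir, "b.csv"), index=False)

# Read every CSV in the folder and stack them row-wise.
# ignore_index=True renumbers the rows 0..N-1 in the combined frame.
frames = [pd.read_csv(f) for f in sorted(glob.glob(os.path.join(tmpdir, "*.csv")))]
combined = pd.concat(frames, ignore_index=True)

print(combined)
combined.to_csv(os.path.join(tmpdir, "InfosTotal.csv"),
                index=False, encoding="utf_8_sig")
```

Because the files all share the same columns, there is no need to pre-declare `df2` with a column list or to copy rows one by one; `concat` handles differing row counts automatically.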