Hi,
I did something very similar recently. If you are trawling through a folder of files, first check whether the structure of the data is the same in each one; if so, use glob to go through the files. Here are all the libraries I needed to import to achieve this, and below that is the start of the for loop you need.
import glob
import math
import csv
import pandas as pd
import numpy as np
# set the path in a variable. this is for a Windows machine;
# the backslashes are doubled because a single backslash starts
# an escape sequence, so a second backslash is needed to stand
# for a literal one. Linux and Mac use forward slashes, so only
# one is needed there.
path = 'A:\\path_to_folder\\**\\*.csv'
# for loop using glob.
for data_path in glob.glob(path, recursive=True):
    # This line reads the data into a pandas dataframe. The parameters were for:
    # - a csv file (passed as data_path, the variable from the loop above)
    # - files separated with tabs/whitespace rather than commas
    # - skipping the first 115 lines and adding my own headings, because the
    #   files were inconsistent about where the real data started.
    df = pd.read_table(data_path, delim_whitespace=True, skiprows=115)
    # It's up to you now what you do with that data.
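To make the pattern concrete, here is a runnable sketch of the same idea that stacks every matching file into one dataframe. The folder, file contents, and column names ("x", "y") are made-up stand-ins for your real data, and I'm assuming tab-separated files with no heading row:

```python
import glob
import os
import tempfile

import pandas as pd

# Build a small demo tree with two tab-separated files
# (placeholders for the real data).
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "sub"), exist_ok=True)
with open(os.path.join(root, "a.csv"), "w") as f:
    f.write("1\t2\n3\t4\n")
with open(os.path.join(root, "sub", "b.csv"), "w") as f:
    f.write("5\t6\n")

# Recursive glob: ** matches this folder and every sub-folder.
frames = []
for data_path in glob.glob(os.path.join(root, "**", "*.csv"), recursive=True):
    # sep="\t" because these demo files are tab-separated;
    # header=None because they carry no heading row, so we
    # supply our own column names.
    frames.append(pd.read_table(data_path, sep="\t", header=None, names=["x", "y"]))

# One dataframe with all the files stacked together.
df = pd.concat(frames, ignore_index=True)
print(len(df))  # 3 rows across the two files
```

If your real files all share the same columns, pd.concat like this is usually the easiest way to end up with one table to work on.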
This will go through each file in the given directory, and also through all sub-directories in that folder, looking for any file with a .csv extension. I would google glob to find out how to get it working right for your case, though I think this is fairly standard out of the box. Ultimately, this was all that was required to get the data into a usable form; it's certainly a starting point.
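One side note: if the doubled backslashes bother you, pathlib does the same recursive search with rglob and builds paths with the right separator for your OS. A small self-contained sketch (the temp folder and file names are just for demonstration):

```python
import tempfile
from pathlib import Path

# Make a throwaway folder with a csv at the top level and one
# in a sub-directory, then find both with rglob.
root = Path(tempfile.mkdtemp())
(root / "sub").mkdir()
(root / "a.csv").write_text("1\t2\n")
(root / "sub" / "b.csv").write_text("3\t4\n")

# rglob("*.csv") is equivalent to glob's "**/*.csv" with
# recursive=True, and no escaped backslashes are needed.
found = sorted(p.name for p in root.rglob("*.csv"))
print(found)  # ['a.csv', 'b.csv']
```

On Windows you could also write the original pattern as a raw string, r'A:\path_to_folder\**\*.csv', to avoid doubling the backslashes.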
Good luck.