Python Forum

Full Version: Reduce four for loops or parallelizing code in Python
I have this code that I have been working on, creating data based on my actual data. I am using pandas and Python. Here is what my code looks like:
import numpy as np
import pandas as pd

new_df = pd.DataFrame(columns=['dates', 'Column_D', 'Column_A', 'VALUE', 'Column_B', 'Column_C'])
for i in df["dates"].unique():
    for j in df["Column_A"].unique():
        for k in df["Column_B"].unique():
            for m in df["Column_C"].unique():
                n = df[(df["Column_D"] == 'orange') & (df["dates"] == '2005-1-1') & (df["Column_A"] == j) & (df["Column_B"] == k) & (df["Column_C"] == m)]['VALUE']
                x = df[(df["dates"] == '2005-1-1') & (df["Column_A"] == j) & (df["Column_B"] == k) & (df["Column_C"] == m)]['VALUE'].sum()
                tempVal = df[(df["dates"] == i) & (df["Column_A"] == j) & (df["Column_B"] == k) & (df["Column_C"] == m)]['VALUE'].sum()
                finalVal = (n * tempVal) / (x - n)
                # guard against empty selections, NaN, and division by zero
                if finalVal.empty or finalVal.isnull().values.any() or np.isinf(finalVal).values.any():
                    finalVal = 0
                finalVal = int(finalVal)

                new_df = new_df.append({'dates': i, 'Column_D': 'orange', 'Column_A': j, 'VALUE': finalVal, 'Column_B': k, 'Column_C': m}, ignore_index=True)
My code takes a long time to run right now and I'm not sure how to fix that. I suspect it is because everything runs sequentially. Could I get some help speeding it up? I would like to know how to run my code in parallel and how to reduce the number of for loops. I heard pyspark is good, but will it help me? Thanks!
Print the contents of i, j, k and m at the beginning of each loop.

This will show you how many times each sub-loop has to spin through, which is where all the time is spent. Keep in mind that each sub-loop runs as many times as its parent instructs, at every level, so by the time you get to Column_C the number of iterations is massive.
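You can estimate the total without printing anything: the innermost body runs once per combination of unique values, so multiply the unique counts at each level. A quick sketch with a made-up df (your real frame will give different, likely much larger, numbers):

```python
import pandas as pd

# Stand-in for the poster's df; the real data will have different counts
df = pd.DataFrame({
    "dates": ["2005-1-1", "2005-1-1", "2005-2-1", "2005-2-1"],
    "Column_A": ["a", "b", "a", "b"],
    "Column_B": ["x", "y", "x", "y"],
    "Column_C": ["p", "p", "p", "p"],
})

# Iterations of the innermost loop body =
# product of the unique counts at every nesting level
total = (df["dates"].nunique()
         * df["Column_A"].nunique()
         * df["Column_B"].nunique()
         * df["Column_C"].nunique())
print(total)  # 2 * 2 * 2 * 1 = 8 for this toy frame
```

Each of those iterations runs four full boolean scans of df, which is why the cost explodes.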

This must be avoided.

To take a stab at correcting this, you will need to provide a copy of your df at the start and explain the purpose of new_df.
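For reference, this kind of quadruple loop can usually be replaced entirely by grouped sums plus a merge, with no explicit loops at all. The following is only a sketch of the idea on a toy frame, under two assumptions not stated in the original post: n is treated as a per-group sum of the 'orange' baseline rows, and combinations that never appear in df are dropped rather than written out as zero rows:

```python
import numpy as np
import pandas as pd

# Toy stand-in for the poster's df
df = pd.DataFrame({
    "dates":    ["2005-1-1", "2005-1-1", "2005-2-1"],
    "Column_D": ["orange",   "apple",    "orange"],
    "Column_A": ["a", "a", "a"],
    "Column_B": ["x", "x", "x"],
    "Column_C": ["p", "p", "p"],
    "VALUE":    [2, 6, 5],
})

keys = ["Column_A", "Column_B", "Column_C"]
base = df[df["dates"] == "2005-1-1"]

# n: 'orange' VALUE on the baseline date, summed per group
n_tbl = (base[base["Column_D"] == "orange"]
         .groupby(keys)["VALUE"].sum().rename("n").reset_index())
# x: total VALUE on the baseline date, per group
x_tbl = base.groupby(keys)["VALUE"].sum().rename("x").reset_index()
# tempVal: total VALUE per (date, group)
t_tbl = (df.groupby(["dates"] + keys)["VALUE"]
         .sum().rename("t").reset_index())

out = t_tbl.merge(n_tbl, on=keys, how="left").merge(x_tbl, on=keys, how="left")
out["VALUE"] = out["n"] * out["t"] / (out["x"] - out["n"])
# Mirror the loop's guard: NaN/inf (missing groups, zero denominator) -> 0
out["VALUE"] = (out["VALUE"].replace([np.inf, -np.inf], np.nan)
                .fillna(0).astype(int))
out["Column_D"] = "orange"
new_df = out[["dates", "Column_D", "Column_A", "VALUE", "Column_B", "Column_C"]]
print(new_df)
```

This computes every per-group quantity once instead of re-scanning df in the innermost loop, and it also avoids the repeated new_df.append, which copies the whole frame on every call. If the data are too big for one machine, the same groupby/join shape translates directly to pyspark, but try the pandas version first.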