Python Forum
IndexError in Array while trying to do machine learning - Printable Version




IndexError in Array while trying to do machine learning - Mariaoye - Nov-12-2020

Hi All,

I am trying to predict income (70000+) from specific categorical fields (Sex and Highest Cert,dip,deg) using the Python code below.

I created ranges for the average employment income and then specified the particular ranges (70000+) that I want to predict from Sex and Highest Cert,dip,deg.

However, I get an error when I reach the one-hot encoding part of the code. I am using Python in Visual Studio. I have tried changing the categorical field to "Age", but that does not work either. The code is below.

# %% read dataframe from part1
import pandas as pd

df = pd.read_pickle("data.pkl")

#%%
import numpy as np
bins = [0, 30000, 50000, 70000, 100000, np.inf]
names = ['<30000', '30000-50000', '50000-70000', '70000-100000', '100000+']

df['Avg Emp Income Range'] = pd.cut(df['Avg Emp Income'], bins, labels=names)
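# each row's Avg Emp Income is now mapped to one of the labels above,
# e.g. an income of 85000 would fall in the '70000-100000' bin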

#%% OHE of Avg empl income
for val in df["Avg Emp Income Range"].unique():
    df[f"Avg Emp Income Range_{val}"] = df["Avg Emp Income Range"] == val

#%% selecting data
x= ["Sex","Highest Cert,dip,deg"]

#%%
success=["Avg Emp Income Range_70000-100000","Avg Emp Income Range_100000+"]
y=success

# %% split into training / testing sets
from sklearn.model_selection import train_test_split

x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=123)

#%%
from sklearn.compose import ColumnTransformer
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import OneHotEncoder
import numpy as np
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

enc = OneHotEncoder(handle_unknown="ignore")
ct = ColumnTransformer(
    [
        ("ohe", enc, ["Sex","Highest Cert,dip,deg",])
    ],
    remainder="passthrough",
)
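# this transformer one-hot encodes the "Sex" and "Highest Cert,dip,deg" columns
# and passes any remaining columns through unchanged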

x_train = ct.fit_transform(x_train)
x_test = ct.transform(x_test)
Error:
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
c:\Users\maria\Documents\Project Capstone 2\Z NO\machine L.py in
     42 )
     43
---> 44 x_train = ct.fit_transform(x_train)
     45 x_test = ct.transform(x_test)

c:\Users\maria\Documents\Project Capstone 2\Z NO\venv\lib\site-packages\sklearn\compose\_column_transformer.py in fit_transform(self, X, y)
    522         else:
    523             self._feature_names_in = None
--> 524         X = _check_X(X)
    525         # set n_features_in_ attribute
    526         self._check_n_features(X, reset=True)

c:\Users\maria\Documents\Project Capstone 2\Z NO\venv\lib\site-packages\sklearn\compose\_column_transformer.py in _check_X(X)
    649     if hasattr(X, '__array__') or sparse.issparse(X):
    650         return X
--> 651     return check_array(X, force_all_finite='allow-nan', dtype=np.object)
    652
    653

c:\Users\maria\Documents\Project Capstone 2\Z NO\venv\lib\site-packages\sklearn\utils\validation.py in inner_f(*args, **kwargs)
     70                           FutureWarning)
     71         kwargs.update({k: arg for k, arg in zip(sig.parameters, args)})
---> 72         return f(**kwargs)
     73     return inner_f
     74

c:\Users\maria\Documents\Project Capstone 2\Z NO\venv\lib\site-packages\sklearn\utils\validation.py in check_array(array, accept_sparse, accept_large_sparse, dtype, order, copy, force_all_finite, ensure_2d, allow_nd, ensure_min_samples, ensure_min_features, estimator)
    621                 "Reshape your data either using array.reshape(-1, 1) if "
    622                 "your data has a single feature or array.reshape(1, -1) "
--> 623                 "if it contains a single sample.".format(array))
    624
    625     # in the future np.flexible dtypes will be handled like object dtypes

ValueError: Expected 2D array, got 1D array instead:
array=['Sex'].
Reshape your data either using array.reshape(-1, 1) if your data has a single feature or array.reshape(1, -1) if it contains a single sample.
Please, what am I doing wrong?
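From the traceback it looks like x_train at this point is just the 1-D array ['Sex'], so the split seems to be happening on the lists of column names rather than on the data itself. Is that the problem? As a rough, untested sketch of what I mean (the column names come from my code above; combining the two 70000+ indicator columns into one target is just my guess at the intent):

# rough sketch, not tested: select the actual columns of df instead of lists of names
X = df[["Sex", "Highest Cert,dip,deg"]]           # 2-D DataFrame of the two categorical features
y = (df["Avg Emp Income Range_70000-100000"]
     | df["Avg Emp Income Range_100000+"])        # single True/False target: income 70000+

x_train, x_test, y_train, y_test = train_test_split(X, y, random_state=123)

# ct.fit_transform(x_train) would then receive a 2-D DataFrame containing the
# "Sex" and "Highest Cert,dip,deg" columns that the ColumnTransformer expects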

Thank you.