Jun-01-2020, 06:28 AM
hi, this is my first post here. I hope it's the right subcategory.
I have already trained an LSTM model for an activity recognition task, and now I want to build an API so that when someone sends data to it, the API returns the predicted category. I have 6 categories. I used to load the data locally, but my supervisor told me that is a mistake and that the data must come over the internet, as JSON.
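"Via JSON" just means the client serializes each sensor reading and POSTs it to the API, and the server parses it back into a dict. A minimal sketch (the values are hypothetical; the field names are the ones used in the code at the end of the post):

```python
import json

# Hypothetical single sensor reading, serialized the way a client would POST it
reading = {"accx": 0.1, "accy": 0.2, "accz": 9.8,
           "gyrx": 0.0, "gyry": 0.0, "gyrz": 0.0}

body = json.dumps(reading)    # the string that travels over the network
parsed = json.loads(body)     # the dict the server gets back (cf. request.json)
print(parsed["accx"])
```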
so I am following these steps:
train the model
save the model with h5
load the model
make the preprocess step
make the prediction
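The preprocess step (step 4) is mostly about slicing the time series into fixed-size windows. A standalone sketch of that windowing with plain NumPy, using the same 50 Hz sampling rate and 2-second frames as the code at the end of the post (the input signal here is hypothetical):

```python
import numpy as np

def make_frames(signal, frame_size, hop_size):
    # Slice a 1-D signal into windows of length frame_size, advancing by hop_size.
    # With hop_size == frame_size the windows do not overlap.
    frames = [signal[i:i + frame_size]
              for i in range(0, len(signal) - frame_size, hop_size)]
    return np.asarray(frames)

Fs = 50                  # sampling rate (Hz), as in the posted code
frame_size = Fs * 2      # 2-second windows -> 100 samples each
signal = np.arange(500)  # hypothetical single-axis sensor stream
frames = make_frames(signal, frame_size, frame_size)
print(frames.shape)      # -> (4, 100)
```

The posted code does the same thing for all six sensor axes at once and stacks them into an array of shape (n_frames, frame_size, 6), which is the shape an LSTM expects.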
After execution, I do not get any category back; instead I get KeyError: 'activity'.
So I am wondering, what is my mistake here? In fact, my main problem is that I do not really understand the concept: what does it mean to get a category back? The code is below.
from flask import Flask, request, jsonify  # jsonify will return the data
from keras.models import load_model
from sklearn.preprocessing import StandardScaler, LabelEncoder
import scipy.stats as stats
import numpy as np
import pandas as pd

# initialize our Flask application and the Keras model
app = Flask(__name__)
model = None

def load_model():
    global model
    model = load_model('flask_model.h5')

@app.route("/predict", methods=["GET", "POST"])
def index():
    data = request.json
    df_final = pd.DataFrame(data, index=[0])
    df_final.dropna(how='any', inplace=True)

    label = LabelEncoder()
    df_final['label'] = label.fit_transform(df_final['activity'])
    df_final.head()

    X = df_final[['accx', 'accy', 'accz', 'gyrx', 'gyry', 'gyrz']]  # feature space
    y = df_final['label']  # output

    scaler = StandardScaler()
    X = scaler.fit_transform(X)

    df_final = pd.DataFrame(X, columns=['accx', 'accy', 'accz', 'gyrx', 'gyry', 'gyrz'])
    df_final['label'] = y.values

    Fs = 50
    frame_size = Fs * 2
    hop_size = frame_size

    def get_frames(df_final, frame_size, hop_size):
        N_FEATURES = 6
        frames = []
        labels = []
        for i in range(0, len(df_final) - frame_size, hop_size):
            accx = df_final['accx'].values[i: i + frame_size]
            accy = df_final['accy'].values[i: i + frame_size]
            accz = df_final['accz'].values[i: i + frame_size]
            gyrx = df_final['gyrx'].values[i: i + frame_size]
            gyry = df_final['gyry'].values[i: i + frame_size]
            gyrz = df_final['gyrz'].values[i: i + frame_size]
            # Retrieve the most often used label in this segment
            label = stats.mode(df_final['label'][i: i + frame_size])[0][0]
            frames.append([accx, accy, accz, gyrx, gyry, gyrz])
            labels.append(label)
        # Bring the segments into a better shape
        frames = np.asarray(frames).reshape(-1, frame_size, N_FEATURES)
        labels = np.asarray(labels)
        return frames, labels

    X, y = get_frames(df_final, frame_size, hop_size)
    X = np.array(X).flatten()
    y = np.array(y).flatten()

    predicted_category = expm1(prediction.flatten()[0])
    return jsonify({"category": str(predicted_category)})

# def predict():
#     pred = model.predict(np.array(X).tolist()).tolist()
#     return jsonify({"Prediction": pred}), 200

if __name__ == '__main__':
    app.run(host="localhost", port=8000, debug=False, threaded=False)
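On the KeyError itself: request.json is just a dict built from the posted body, so indexing a key the client never sent raises KeyError. A minimal illustration with a hypothetical payload that, like most prediction requests, carries only sensor values and no 'activity' label:

```python
# Hypothetical request body: sensor readings only, no 'activity' key
payload = {"accx": 0.1, "accy": 0.2, "accz": 9.8}

try:
    label = payload["activity"]   # this is what raises KeyError: 'activity'
except KeyError as exc:
    print("missing key:", exc)

# .get() returns a default instead of raising
label = payload.get("activity", "unknown")
print(label)
```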