Python Forum
Best Accuracy From Loop.
#1
Hi guys,
I'm on Ubuntu 18.04, using the PyCharm editor.

I followed a YouTube tutorial to build a linear regression model (I'm not sure whether posting the link here is allowed). Everything works fine except for one small thing: the loop section is supposed to find the best accuracy and save the model with it, but every time I print the accuracy it shows the same value. I think there is an issue there, but I can't figure out what it is. Here are my code and the output:
# Numpy Library
import numpy as np
# Pandas Library
import pandas as pd
# Split The Data For Testing and Training.
from sklearn.model_selection import train_test_split
# For Fitting and Predicting The Data.
from sklearn.linear_model import LinearRegression
# Visualization of The Model.
import matplotlib.pyplot as plt
import pickle
from matplotlib import style
from sklearn.preprocessing import PolynomialFeatures
from sklearn.metrics import r2_score

student_data_path = '/home/ahmdwd/Documents/WrkCrt/Prjcts/000 Others/student/student-mat.csv'
student_data = pd.read_csv(student_data_path, sep=';')
# print(student_data.head())
# print('---------------------')

student_data = student_data[['G1', 'G2', 'G3', 'studytime', 'failures', 'absences']]
# print(student_data.head())

predict = 'G3'
X = np.array(student_data.drop(columns=[predict]))
y = np.array(student_data[predict])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)


# ### For Getting Best Accuracy and Save The Model With it.
best = 0
for _ in range(30):
    linear = LinearRegression()
    linear.fit(X_train, y_train)
    accur = linear.score(X_test, y_test)
    print(f'Accuracy Inside The Loop {accur}')
    print('-------------')

    if accur > best:
        best = accur
        with open('Student Model.pickle', 'wb') as f:
            pickle.dump(linear, f)

with open('Student Model.pickle', 'rb') as amw_linear:
    linear_pickl = pickle.load(amw_linear)

print(f' Linear Coef_ :: {linear_pickl.coef_}')
print('-------------')
print(f'Linear Intercept :: {linear_pickl.intercept_}')

predictions = linear_pickl.predict(X_test)
for X in range(len(predictions)):
    print(f'Prediction : {predictions[X]}  ---  X_test : {X_test[X]}  ----  y_test : {y_test[X]}')
Output:
Accuracy Inside The Loop 0.8886248365020478
-------------
(the same two lines repeat identically for all 30 iterations)
 Linear Coef_ :: [ 0.14457413  0.99651608 -0.25816271 -0.17672835  0.03641823]
-------------
Linear Intercept :: -1.4681428588641054
Prediction : 13.037088384513288  ---  X_test : [15 13  3  2 14]  ----  y_test : 13
Prediction : 12.996406154864992  ---  X_test : [14 13  3  1 12]  ----  y_test : 13
Prediction : 5.264809788665662  ---  X_test : [7 6 1 0 0]  ----  y_test : 0
Prediction : 3.946991233054696  ---  X_test : [6 5 1 1 0]  ----  y_test : 0
Prediction : 9.351460954989257  ---  X_test : [ 9 10  3  0  9]  ----  y_test : 9
Prediction : 14.58457382270452  ---  X_test : [13 15  3  0  0]  ----  y_test : 15
Prediction : 10.495786294728287  ---  X_test : [10 11  2  0  2]  ----  y_test : 11
(... remaining prediction lines trimmed ...)
Prediction : 13.067114963316824  ---  X_test : [14 13  2  0  2]  ----  y_test : 13
Reply
#2
score returns a float - a single number. See https://scikit-learn.org/stable/modules/...ssion.html
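In other words, each call to `score` gives you exactly one number: the R² of the model on whatever data you pass it. A minimal sketch on tiny synthetic data (the arrays here are made up purely for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

# Tiny synthetic data with a perfect linear relationship.
X = np.arange(10).reshape(-1, 1)
y = 2 * X.ravel() + 1

model = LinearRegression().fit(X, y)
acc = model.score(X, y)   # a single float: the R^2 score

# .score() is the same thing as r2_score on the model's predictions.
assert np.isclose(acc, r2_score(y, model.predict(X)))
print(acc)   # close to 1.0 here, since the data is perfectly linear
```

Since the fit, the test data, and the training data in your loop never change, that one number comes out identical on every pass.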
Reply
#3
(Mar-14-2020, 09:15 PM)jefsummers Wrote: score returns a float - a single number. See https://scikit-learn.org/stable/modules/...ssion.html

Could you please point me to the line I need to check?
I'm still in the learning phase, sorry.
Reply
#4
I don't see an error. Looks like it is doing what you tell it. Saves the best model, then predicts using that.
Reply
#5
The accuracy never changes because the variables X_test, X_train, y_test, and y_train are all set before the loop begins. Since they never change, the calculation will always come out the same. To change this, the for loop needs to begin earlier and those variables need to be instantiated with values that will be different each time.
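To illustrate, here is a sketch of how the loop could be restructured so that a fresh split (and therefore a fresh score) is produced on every pass. The data below is a synthetic stand-in for the student dataset, just so the snippet runs on its own:

```python
import pickle
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Hypothetical synthetic data standing in for the student dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = X @ np.array([0.1, 1.0, -0.3, -0.2, 0.05]) + rng.normal(scale=0.5, size=200)

best = 0.0
for _ in range(30):
    # The split now happens INSIDE the loop, so every pass
    # trains and scores on a different train/test partition.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)
    linear = LinearRegression().fit(X_train, y_train)
    accur = linear.score(X_test, y_test)
    if accur > best:
        best = accur
        with open('Student Model.pickle', 'wb') as f:
            pickle.dump(linear, f)

print(f'Best accuracy over 30 splits: {best}')
```

With the split inside the loop, the printed accuracy varies from pass to pass and the pickled model is the one that happened to score best on its own test set.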
Reply


