Executable not working (Anaconda, Win7 32 bit)
#1
I wrote a piece of code in Python using Anaconda 32 bit.
The code gets the log page from the web server of 17 satellite receivers.


The code works fine in Jupyter; the executable does not.

Any idea?

[Image: error message from the executable]
#2
What do you mean by "the executable"? Can you provide your code?
#3
I made an exe file and got the error message shown in the picture above. The .py file works.



The primary procedure connects to a professional satellite receiver's web interface and fetches the alarmlog HTML page. I save the page, then filter it to keep only the messages I am interested in.

The primary procedure is called 17 times (we have 17 satellite receivers of the same model).


I have to warn you, I am not an expert in coding.

There are many records in each receiver, so I had to increase the recursion limit from 1,000 to 5,000.

#Newtec scraping
#Fourth code, 16/01/2019


import win32api  # used below: win32api.GetVolumeInformation("C:\\")

import urllib
import urllib.request
from bs4 import BeautifulSoup
import os
import sys

sys.setrecursionlimit(5000)
OutFileName="out0002.txt"
NeedSaveFile=False

def make_soup(url):
    thepage = urllib.request.urlopen(url)
    soupdata = BeautifulSoup(thepage,"html.parser")
    return soupdata

def inplace_change(filename, old_string, new_string):
    # Safely read the input file using 'with'
    with open(filename) as f:
        s = f.read()
        if old_string not in s:
            #print('"{old_string}" not found in {filename}.'.format(**locals()))
            return
    with open(filename, 'w') as f:
        print('Changing "{old_string}" to "{new_string}" in {filename}'.format(**locals()))
        s = s.replace(old_string, new_string)
        f.write(s)


def PrimaryScrappingProcedure(IRDip,IRDlocation,IRDcarrier,IRDfilename,IRD_ID):
    AddressLog="http://" + IRDip + "/alarmlog.html"
    soup = make_soup(AddressLog)
   
    #I had some issues with scrapping directly from the URL
    #First save in file and then scrape the file
    html = soup.prettify()  #bs is your BeautifulSoup object

    
    FullIRDfilename = 'Full_' + IRDfilename
    with open(FullIRDfilename, "w") as out:
        # Write character by character so a single unencodable
        # character does not abort the whole dump.
        for ch in html:
            try:
                out.write(ch)
            except Exception:
                pass

    inplace_change(FullIRDfilename,chr(10),"")
    inplace_change(FullIRDfilename,"<html> <head>  <title>   ALARMLOG  </title> </head> <body>  <pre>                                                                                                                                               <br/>","")
    inplace_change(FullIRDfilename,"Start ","\nStart ")
    inplace_change(FullIRDfilename,"Stop ","\nStop ")

    inplace_change(FullIRDfilename,"<span style=\"color:#2E2EFE\">","")
    inplace_change(FullIRDfilename,"<span style=\"color:#088A08\">","")
    inplace_change(FullIRDfilename,"<span style=\"color:#D2000E\">","")
    inplace_change(FullIRDfilename,"<font color=\"black\">","")

    inplace_change(FullIRDfilename,"Device reset flag                                                                                        ","Device reset flag")
    inplace_change(FullIRDfilename,"ALARM Reference clock                                                                  ","ALARM Reference clock")
    inplace_change(FullIRDfilename,"INFO  System Started                                                                                           ","INFO  System Started")
    inplace_change(FullIRDfilename,"ALARM Reference clock                        ","ALARM Reference clock")
    inplace_change(FullIRDfilename,"ALARM Interface                                                                                                ","ALARM Interface")


    inplace_change(FullIRDfilename,"Demod BB sync                                                                                            ","Demod BB sync")
    inplace_change(FullIRDfilename,"ALARM Demod PL sync                                                                                            ","ALARM Demod PL sync")

    inplace_change(FullIRDfilename,"ALARM Demod lock                                                                                               ","ALARM Demod lock")


    inplace_change(FullIRDfilename,"","")  # NOTE: empty search string, so this call does nothing as written
    inplace_change(FullIRDfilename,"</pre>","")
    inplace_change(FullIRDfilename,"</pre","")
    inplace_change(FullIRDfilename,"<br/>","")
    inplace_change(FullIRDfilename,"> </body>","")
    inplace_change(FullIRDfilename,"</span>","")
    inplace_change(FullIRDfilename,"</font>","")
    inplace_change(FullIRDfilename,"</html> ","")


    inplace_change(FullIRDfilename,"################### CURRENT POSITION ###############","")
    
    
    UsefulLogPhrases = ['ALARM Demod lock']  # Example: ['abc', 'def', 'ghi', 'jkl']

    # Line-filtering approach taken from:
    # https://stackoverflow.com/questions/11968998/remove-lines-that-contain-certain-string
    with open(FullIRDfilename) as FirstFile, \
         open('DemodOnly_' + IRDfilename, 'w') as SecondFile:
        for line in FirstFile:
            if any(word in line for word in UsefulLogPhrases):
                SecondFile.write(IRDlocation + ', ' + IRDcarrier + ', ' + line)
    
    
##################################################
### def PrimaryScrappingProcedure Ends here    ###
##################################################




#Main starts here
d = win32api.GetVolumeInformation("C:\\")

if d[1] == 477085405:  # d[1] is the volume's serial number
    PrimaryScrappingProcedure("172.18.0.100","KTZ","tp55","tp55_KTZ.txt","KTZ_tp55")
    PrimaryScrappingProcedure("172.18.0.101","KTZ","tp73","tp73_KTZ.txt","KTZ_tp73")
    PrimaryScrappingProcedure("172.18.0.102","KTZ","tp61","tp61_KTZ.txt","KTZ_tp61")
    PrimaryScrappingProcedure("172.18.0.103","KTZ","tp71","tp71_KTZ.txt","KTZ_tp71")
    PrimaryScrappingProcedure("172.18.0.104","KTZ","SkyIT","SkyIT_KTZ.txt","KTZ_SkyIT")
    PrimaryScrappingProcedure("172.18.0.105","KTZ","Globe","Globe_KTZ.txt","KTZ_Globe")



    PrimaryScrappingProcedure("172.18.0.110","Thess","tp55","tp55_Thess.txt","Thess_tp55")
    PrimaryScrappingProcedure("172.18.0.111","Thess","tp73","tp73_Thess.txt","Thess_tp73")
    PrimaryScrappingProcedure("172.18.0.112","Thess","tp61","tp61_Thess.txt","Thess_tp61")
    PrimaryScrappingProcedure("172.18.0.113","Thess","tp71","tp71_Thess.txt","Thess_tp71")
    PrimaryScrappingProcedure("172.18.0.114","Thess","SkyIT","SkyIT_Thess.txt","Thess_SkyIT")
    PrimaryScrappingProcedure("172.18.0.115","Thess","Globe","Globe_Thess.txt","Thess_Globe")



    PrimaryScrappingProcedure("172.18.0.120","Hera","tp55","tp55_Hera.txt","Hera_tp55")
    PrimaryScrappingProcedure("172.18.0.121","Hera","tp73","tp73_Hera.txt","Hera_tp73")
    PrimaryScrappingProcedure("172.18.0.122","Hera","tp61","tp61_Hera.txt","Hera_tp61")
    PrimaryScrappingProcedure("172.18.0.123","Hera","tp71","tp71_Hera.txt","Hera_tp71")
    PrimaryScrappingProcedure("172.18.0.124","Hera","SkyIT","SkyIT_Hera.txt","Hera_SkyIT")


else:
    print (d[1])
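As a side note, the 17 nearly identical calls could be table-driven. A sketch of the idea (`receiver_jobs` is a hypothetical helper of mine, and the receiver list shows only the first two entries, not the full inventory):

```python
def receiver_jobs(receivers):
    """Build the argument tuples for PrimaryScrappingProcedure
    from (ip, location, carrier) entries."""
    jobs = []
    for ip, location, carrier in receivers:
        filename = "%s_%s.txt" % (carrier, location)
        ird_id = "%s_%s" % (location, carrier)
        jobs.append((ip, location, carrier, filename, ird_id))
    return jobs

# First two receivers as an example; the real table has 17 rows.
RECEIVERS = [
    ("172.18.0.100", "KTZ", "tp55"),
    ("172.18.0.101", "KTZ", "tp73"),
]

for job in receiver_jobs(RECEIVERS):
    print(job)
    # In the real script: PrimaryScrappingProcedure(*job)
```

This way, adding a new receiver is one new tuple instead of one more copy-pasted call.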
#4
Any help?
#5
It seems to be a memory error, but there are two things:

1. The executable does not work; running the code from Jupyter works fine.

2. The size of the files is not big.
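One way to see what the exe is actually failing on: a frozen executable's console window often closes before the traceback can be read. A minimal sketch (the `safe_main` wrapper and the log filename are my own suggestions, not part of the original script) that saves the traceback to a file next to the exe:

```python
import traceback

def safe_main(main, logfile="error_log.txt"):
    """Run main(); if it raises, write the full traceback to a file
    so the error survives after the console window closes."""
    try:
        main()
        return True
    except Exception:
        with open(logfile, "w") as f:
            traceback.print_exc(file=f)
        return False

# Usage in the real script, assuming the 17 calls are wrapped
# in a function such as run_all_receivers():
#   if __name__ == "__main__":
#       safe_main(run_all_receivers)
```

Running the exe once with a wrapper like this should turn "the executable does not work" into a concrete traceback.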
