Apr-23-2020, 04:54 PM
Hi,
Yes, I need all the data for an archive.
So it doesn't need to reload all the data on every one-minute update, only append the new data on top of all the previous data. That way it loads much faster.
I know how to do this, as shown in the code below, where the output file holds unlimited data while the source can only hold about 30 days of data.
But I don't know how to do it when the output file is .xlsx and includes the changes you suggested, which worked.
import time
import schedule

def task1():
    # lines already present in the archive file
    existingLines = set(line.strip() for line in open("C:\\Users\\Makada\\Desktop\\CR1000_Table1 - kopie.dat"))
    outfile = open("C:\\Users\\Makada\\Desktop\\CR1000_Table1 - kopie.dat", "a+")
    for content in open("C:\\Campbellsci\\LoggerNet\\CR1000_Table1.dat", "r"):
        if content.strip() not in existingLines:  # to avoid duplicate lines
            outfile.write(content)
            existingLines.add(content.strip())
    outfile.close()

schedule.every().minute.at(":01").do(task1)

while True:
    schedule.run_pending()
    time.sleep(1)
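For the .xlsx case, here is a minimal sketch of the same append-only idea, assuming the openpyxl library and using hypothetical file names (not the original paths). It loads the archive workbook if it exists, collects the rows already stored, and appends only the source lines that are not there yet:

```python
# Hedged sketch: append-only update of an .xlsx archive using openpyxl.
# File names and the comma-separated source layout are illustrative assumptions.
import os
from openpyxl import Workbook, load_workbook

SOURCE = "CR1000_Table1.csv"     # hypothetical source file (last ~30 days)
ARCHIVE = "CR1000_archive.xlsx"  # hypothetical growing archive

def append_new_rows():
    # Open the existing archive, or create a fresh workbook on the first run
    if os.path.exists(ARCHIVE):
        wb = load_workbook(ARCHIVE)
    else:
        wb = Workbook()
    ws = wb.active
    # Collect rows already archived, joined back into strings for comparison
    existing = set()
    for row in ws.iter_rows(values_only=True):
        existing.add(",".join("" if v is None else str(v) for v in row))
    # Append only the source lines not seen before
    with open(SOURCE) as src:
        for line in src:
            fields = line.strip().split(",")
            if fields and ",".join(fields) not in existing:
                ws.append(fields)
                existing.add(",".join(fields))
    wb.save(ARCHIVE)
```

You would call append_new_rows() from the scheduled task instead of the text-file copy loop. Note that .xlsx files cannot be opened in append mode like .dat files, so the whole workbook is loaded and re-saved each run, which gets slower as the archive grows.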