It's not so easy, because you need to know how csv.reader works, and you need to convert str into float, for example.
Otherwise you can't do the comparison.
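To see why the conversion matters: Python compares strings character by character, so numeric strings compare in the wrong order. A quick illustration:

```python
# Strings compare lexicographically, character by character,
# so "9.5" counts as bigger than "10.2" because "9" > "1".
print("9.5" > "10.2")                 # True  -- wrong for numbers
print(float("9.5") > float("10.2"))   # False -- correct
```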
Here is an example with:
- skipping the first line before using csv.DictReader
- DictReader then automatically reads the next line (after the skip, the second line of the file) to get the column names, which later become the keys
- reading two rows, where the first one holds the minimum limits and the second the maximum limits
- converting these str values to float and setting them as attributes on the Reader instance
- iterating through the remaining data and using the methods reader.test_pass(row) and reader.in_valid_range(row)
Code:
import csv
from operator import itemgetter


class Reader:
    def __init__(self, csv_file, sort_fields):
        self.fd = open(csv_file, newline="")
        self.sort_fields = itemgetter(*sort_fields)
        # self.sort_fields(row) -> returns the wanted
        # data from the mapping

        # skipping the first line
        next(self.fd)
        # DictReader automatically reads the next line
        # to get the column names
        self.csv = csv.DictReader(self.fd, delimiter=",")
        # getting the minimum and maximum values
        self.minimum_values = next(self.csv)
        self.maximum_values = next(self.csv)
        # getting the limit values from the dicts and converting them
        # to float so they can be used for comparison
        self.vdd_continuity_min = float(self.minimum_values["VDD Continuity"])
        self.vdd_continuity_max = float(self.maximum_values["VDD Continuity"])
        self.lr_continuity_min = float(self.minimum_values["LR Continuity"])
        self.lr_continuity_max = float(self.maximum_values["LR Continuity"])

    def sort_by_float(self, row):
        # calling self.sort_fields with row -> data
        # map calls float() on each element of data
        # tuple consumes the map
        # a tuple of floats is returned
        return tuple(map(float, self.sort_fields(row)))

    def __iter__(self):
        # sorted() reads all remaining rows into memory
        yield from sorted(self.csv, key=self.sort_by_float)

    def __enter__(self):
        return self

    def __exit__(self, exc_typ, exc_obj, exc_tb):
        # closes the file automatically when leaving
        # the context manager
        self.fd.close()

    @staticmethod
    def test_pass(row):
        # no instance (self) is needed to do the test
        return row.get("Pass/Fail", "").lower() == "passed"

    def in_valid_range(self, row):
        # here self is needed, because attributes of this
        # instance are read
        vdd = float(row["VDD Continuity"])
        lr = float(row["LR Continuity"])
        return (self.vdd_continuity_min <= vdd <= self.vdd_continuity_max) and (
            self.lr_continuity_min <= lr <= self.lr_continuity_max
        )


if __name__ == "__main__":
    with Reader("Log.csv", sort_fields=["VDD Continuity", "Temperature"]) as reader:
        # the reader instance now has some attributes
        print(f"{reader.vdd_continuity_min=}")
        print(f"{reader.vdd_continuity_max=}")
        print(f"{reader.lr_continuity_min=}")
        print(f"{reader.lr_continuity_max=}")
        # implicitly calls the __iter__ method of the Reader instance
        for row in reader:
            if reader.test_pass(row) and reader.in_valid_range(row):
                part_id = row.get("Part ID")
                if part_id:
                    print(part_id)

The output could be made with: https://www.geeksforgeeks.org/working-wi...in-python/
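As a minimal sketch of writing the results out (the file name passed_parts.csv and the collected part_ids list are made up for illustration), the matching Part IDs could go into a new CSV file with the csv module:

```python
import csv

# Made-up result list standing in for the Part IDs collected above
part_ids = ["P001", "P002"]

# newline="" is required by the csv module when opening files
with open("passed_parts.csv", "w", newline="") as fd:
    writer = csv.writer(fd)
    writer.writerow(["Part ID"])                 # header row
    writer.writerows([pid] for pid in part_ids)  # one ID per row
```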
But working directly in Excel will solve your task faster (for now). Later, when you are more experienced, code like this will not be difficult to write.
EDIT: Added the sorting. This method cannot be used if the file takes up half of your RAM.
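The reason for the RAM limit: sorted() has to consume every row before it can return the first one. A small sketch with made-up rows, using the same kind of sort key as above:

```python
from operator import itemgetter

# Made-up rows standing in for the CSV data
rows = [
    {"VDD Continuity": "2.5", "Temperature": "40"},
    {"VDD Continuity": "1.0", "Temperature": "25"},
    {"VDD Continuity": "1.0", "Temperature": "10"},
]

key = itemgetter("VDD Continuity", "Temperature")

# sorted() materializes all rows at once, which is why this approach
# breaks down when the file does not fit comfortably into memory
ordered = sorted(rows, key=lambda row: tuple(map(float, key(row))))
print([row["Temperature"] for row in ordered])  # ['10', '25', '40']
```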
Almost dead, but too lazy to die: https://sourceserver.info
All humans together. We don't need politicians!