Python Forum
Reading a text file - Printable Version




Reading a text file - fivestar - Oct-11-2017

I have a text file of letters on 8 different lines. I want to read through the file, print the length of every 2nd line, and store those lengths in a list.
For example, the text file would look like:
"abcd"
"efgh"
"ijkl"
"mnop"
I am trying to have the output be [4, 4]. It should only read every 2nd line and store its length while ignoring the other lines. This is what I have so far.


def read_seq(file_name, num_seq):
    file = open(file_name, "r")
    file = file.read()      # reads the whole file into one string
    count = 0
    for x in file:          # iterates over the characters of that string, not the lines
        print(x)
        count += 1



RE: Reading a text file - Larz60+ - Oct-12-2017

You didn't use num_seq, so I removed it.
This one starts at n = 1, which is the second line (n starts at 0).

def read_seq(file_name):
    with open(file_name, 'r') as f:
        mydata = f.readlines()
        for n, x in enumerate(mydata):
            if (n + 1) % 2 == 0:    # n = 1, 3, 5, ... -> every 2nd line
                print('{}, {}'.format(n, len(x.rstrip('\n'))))

if __name__ == '__main__':
    read_seq('ziggy.txt')
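
If the goal is to actually end up with the list [4, 4] that fivestar asked for, the lengths can be collected instead of printed. A minimal sketch building on the approach above (ziggy.txt is just the placeholder file name used here):

def read_seq(file_name):
    lengths = []
    with open(file_name, 'r') as f:
        for n, line in enumerate(f):
            if n % 2 == 1:                          # n = 1, 3, 5, ... -> the 2nd, 4th, ... lines
                lengths.append(len(line.rstrip('\n')))
    return lengths

if __name__ == '__main__':
    print(read_seq('ziggy.txt'))                    # e.g. [4, 4] for the sample file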



RE: Reading a text file - wavic - Oct-12-2017

Another way to count every second line:

with open(file_name) as file_obj:
    for num, line in enumerate(file_obj, 1):
        if num & 1:     # check if num is odd
            pass        # skip the odd-numbered lines
        else:
            print(len(line))
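
For clarity: num & 1 looks at the lowest bit of the number, so it is 1 for odd values and 0 for even ones. A tiny illustration (just demo values, not from the thread):

for num in range(1, 5):
    print(num, num & 1)     # prints: 1 1, 2 0, 3 1, 4 0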



RE: Reading a text file - DeaD_EyE - Oct-12-2017

Just as a joke, but you can still learn something from it.
Don't use this code.

import contextlib

with open(file_name) as fd:
    lineiterator = iter(fd)
    for line in lineiterator:
        print(line.strip())
        with contextlib.suppress(StopIteration):
            next(lineiterator)      # throw away the following line
fd is the file object.

The function iter(fd) makes an iterator from the file object; the file is opened in text mode, so iterating it yields one line at a time as a string.
Calling iter() on a file object does the same thing the for-loop does internally. The call next(lineiterator) advances to the next line.
As you can see, the value returned by next(lineiterator) is never used, so that line is simply skipped.
contextlib.suppress is just a context manager that suppresses errors. In this case
StopIteration is suppressed. If StopIteration happens,
the end of the file has been reached. The for-loop gets this exception as well and stops silently.
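
To see why the extra next() call skips a line, the same trick can be demonstrated on any iterator, not just a file. A small sketch with made-up data (nothing from the thread) that prints every other element:

import contextlib

letters = iter(['a', 'b', 'c', 'd', 'e'])
for item in letters:
    print(item)                              # prints a, c, e
    with contextlib.suppress(StopIteration):
        next(letters)                        # silently consumes b, d (and raises at the very end)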


RE: Reading a text file - gruntfutuk - Oct-12-2017

How about: (using Python 3)

from itertools import islice

with open(file_name) as fd:
    alternatelines = islice(fd, 0, None, 2)     # every other line, starting with the first
    for line in alternatelines:
        print(len(line.rstrip()))
Same principle as above, but it uses islice from itertools to step through the file two lines at a time. (I used rstrip on the assumption that each line ends with a newline character.)
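
If fivestar wants the even-numbered lines (the 2nd, 4th, ...) rather than the 1st, 3rd, ..., the only change needed is the start argument of islice, and a list comprehension then gives the [4, 4] list directly. A sketch along those lines, assuming the same file layout as in the original post:

from itertools import islice

with open(file_name) as fd:
    lengths = [len(line.rstrip()) for line in islice(fd, 1, None, 2)]
print(lengths)      # [4, 4] for the sample file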


RE: Reading a text file - DeaD_EyE - Oct-12-2017

Oh, yes. Your approach is much better :-)
Itertools is very powerful.


RE: Reading a text file - snippsat - Oct-12-2017

gruntfutuk Wrote:How about: (using Python 3)
itertools also works for Python 2, and I guess he uses Python 3, judging by his use of the print function.
itertools.islice is cool @gruntfutuk, but for this I would have gone for a simpler approach with enumerate, as shown in the other posts.
with open('in.txt') as f:
    for count, line in enumerate(f, 1):
        if count % 2:                   # odd line numbers: 1, 3, 5, ...
            print(len(line.strip()))



RE: Reading a text file - gruntfutuk - Oct-13-2017

(Oct-12-2017, 08:38 PM)snippsat Wrote:
gruntfutuk Wrote:How about: (using Python 3)
itertools also works for Python 2, and I guess he uses Python 3, judging by his use of the print function.
itertools.islice is cool @gruntfutuk, but for this I would have gone for a simpler approach with enumerate, as shown in the other posts.
with open('in.txt') as f:
    for count, line in enumerate(f, 1):
        if count % 2:                   # odd line numbers: 1, 3, 5, ...
            print(len(line.strip()))

Your approach is best, @snippsat; I was just trying to provide a simpler iter-based alternative to the approach @DeaD_EyE suggested. As the source file is so short, it is all pretty academic. Were the file large, we would be looking for the most efficient approach, avoiding reading anything we don't need to.
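
On that last point: iterating the file object directly, as most of the snippets above do, already reads one line at a time, so even a large file is never loaded whole; only readlines() (and read()) pull everything into memory. A hedged sketch of a lazy version that keeps nothing but the lengths (the function name is made up for illustration, and ziggy.txt is just the placeholder file from earlier):

def second_line_lengths(file_name):
    # Yield the length of every 2nd line without loading the whole file.
    with open(file_name) as f:
        for num, line in enumerate(f, 1):
            if num % 2 == 0:                    # 2nd, 4th, 6th, ... lines
                yield len(line.rstrip('\n'))

if __name__ == '__main__':
    print(list(second_line_lengths('ziggy.txt')))   # e.g. [4, 4] for the sample file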