Python Forum
I need a serial device expert!
#1
Hello! I am very new to Python and possibly just as new to how information flows from devices to computers. The only thing I know is that everything is sent as 0's and 1's. What I don't know is how or where that information gets turned into the hex, text, or numbers we see on the screen.

What I'm trying to do is capture the data stream coming from a sensor over an RS232-to-USB (serial) cable. The device sends this stream continuously as it actively measures at 100 Hz. I have had some luck recording the incoming data and even decoding it:

______________________

Code:

import datetime
import serial

ser = serial.Serial()
ser.port = 'COM3'
ser.baudrate = 9600  # 460800
ser.timeout = 0.02
ser.bytesize = 8
ser.stopbits = 1

ser.open()

f = open('asciitest.txt', 'w')

while True:
    line = ser.readline()
    time = datetime.datetime.now()
    try:
        record = str(time) + '    ' + line.replace(b'\r\n', b'').decode('utf-8') + '\n'
        print(record)
        f.write(record)
    except UnicodeDecodeError:
        continue
Data Result:

2018-12-04 16:59:17.806247 %RAWIMUSA,0,16425.970;0,16425.963510000,00250001,491,32,16,19,-13,24*1ac8f7bf
2018-12-04 16:59:17.807748 %RAWIMUSA,0,16425.980;0,16425.973518000,00250001,492,32,16,9,2,20*840e4396
2018-12-04 16:59:17.810751 %RAWIMUSA,0,16425.990;0,16425.983525000,00250001,491,32,16,24,11,11*f009e545
2018-12-04 16:59:17.971362
2018-12-04 16:59:17.991376
2018-12-04 16:59:18.011390
2018-12-04 16:59:18.031404
2018-12-04 16:59:18.051418
2018-12-04 16:59:18.071433
2018-12-04 16:59:18.091446
2018-12-04 16:59:18.111460
2018-12-04 16:59:18.131474
2018-12-04 16:59:18.151488
2018-12-04 16:59:18.171502
2018-12-04 16:59:18.194518 %RAWIMUSA,0,16426.030;0,16426.023556000,00250001,492,32,16,15,13,35*cf4498a9
2018-12-04 16:59:18.196019 %RAWIMUSA,0,16426.040;0,16426.033564000,00250002,490,32,16,12,-16,32*285c7481
2018-12-04 16:59:18.197520 %RAWIMUSA,0,16426.050;0,16426.043571000,00250001,492,32,16,26,21,23*fce60dbf
2018-12-04 16:59:18.199021 %RAWIMUSA,0,16426.060;0,16426.053579000,00250001,494,33,16,10,10,18*ecebcd5c
2018-12-04 16:59:18.201023 %RAWIMUSA,0,16426.070;0,16426.063586000,00250001,491,32,16,16,-6,32*948490d5
2018-12-04 16:59:18.203524 %RAWIMUSA,0,16426.080;0,16426.073594000,00250002,491,32,16,15,27,28*b4abd72d

_______________________

However, you will notice that there is a significant 0.4 second jump between data points. This happens systematically and I can't figure out any way to improve it. I know the baudrate, bytesize, and stopbit values are correct. With my limited knowledge, I want to guess that it's a buffering issue? But I don't know how any of that really works, which is why I am here looking for the experts!

I should note that I am using:
Windows 10
Python 3.7 with PyCharm 2018.2.4


Any ideas on how I can receive each message closer to the actual 0.01 s time steps, without the 0.4 second jump?
Reply
#2
First of all, I am writing as I am thinking, so take this as something to think about rather than verbatim.
One thing that I notice is your baud rate is 9600.
If the data is being sent at 100 Hz, is 9600 baud fast enough to capture everything?

You need to read up on several things.

I think that you need to have the read routine running all the time in a thread, filling a FIFO (First in, First Out) buffer continuously as it reads.

Then you need another thread that takes data from the FIFO in chunks and either hands it to a process that will analyze it in whatever fashion you need, or writes the data to files of a certain size (starting a new file whenever that many bytes have been written). The data appears to have a well-structured format, and I'm guessing each item is terminated with a newline. If that is the case, the files should contain only whole records.

Now you can label the files with timestamps so that you know what order to process them in.

Finally, you can have multiple threads running that process the data. This will allow you to stay ahead of the flow.
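A minimal sketch of that idea, using Python's built-in queue.Queue as the FIFO (the port name, baud rate, and file name below are placeholders to adjust):

import datetime
import queue
import threading

import serial

fifo = queue.Queue()  # thread-safe FIFO shared by the two workers


def reader(ser):
    # Producer: pull lines off the port as fast as they arrive and
    # timestamp them immediately.
    while True:
        line = ser.readline()
        if line:
            fifo.put((datetime.datetime.now(), line))


def writer(path):
    # Consumer: drain the FIFO and write timestamped records to disk.
    with open(path, 'w') as f:
        while True:
            stamp, line = fifo.get()
            f.write('{}    {}\n'.format(stamp, line.strip().decode('ascii', 'replace')))


ser = serial.Serial('COM3', 460800, timeout=0.1)  # placeholder settings
threading.Thread(target=reader, args=(ser,), daemon=True).start()
threading.Thread(target=writer, args=('capture.txt',), daemon=True).start()
threading.Event().wait()  # keep the main thread alive while the workers run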

Something to think about.
Reply
#3
Thanks for the response!

Quote:One thing that I notice is your baud rate is 9600.
If the data is being sent at 100 Hz, is 9600 baud fast enough to capture everything?

Haha yeah, I was playing with different baud rates to see if there was an effect. You can see that I commented out the actual baud rate of 460800. As far as I can tell, though, it didn't really seem to matter what baud rate was set.

Quote:I think that you need to have the read routine running all the time in a thread, filling a FIFO (First in, First Out) buffer continuously as it reads.

Then you need another thread that takes data from the FIFO in chunks and either hands it to a process that will analyze it in whatever fashion you need, or writes the data to files of a certain size (starting a new file whenever that many bytes have been written). The data appears to have a well-structured format, and I'm guessing each item is terminated with a newline. If that is the case, the files should contain only whole records.

Now you can label the files with timestamps so that you know what order to process them in.

Finally, you can have multiple threads running that process the data. This will allow you to stay ahead of the flow.

I will definitely read up on this. I figured there was a much smarter way to do this.

Can you by chance list a few commands that I can read up on, or possibly elaborate more on running "multiple threads"? I'm not familiar with all the lingo, so I don't really know what a thread is, haha.

Thanks for all the help!!!
Reply
#4
This may seem like a stupid post, but I have been caught once or twice by this:
Quote:Haha yeah, I was playing with different baud rates to see if there was an effect. You can see that I commented out the actual baud rate of 460800. As far as I can tell, though, it didn't really seem to matter what baud rate was set.
100 Hz is pretty slow: make sure the baud rate matches on both ends!
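As a rough sanity check (assuming each %RAWIMUSA line is about 80 characters including the line ending, and 10 bits on the wire per character with 8N1): 100 lines/s × 80 chars × 10 bits ≈ 80,000 bits/s. 9600 baud can only carry about an eighth of that, while the device's actual 460800 baud setting has plenty of headroom.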

For threading, I recommend: https://pymotw.com/3/threading/
This site is by Doug Hellmann (author of The Python 3 Standard Library by Example).
It will step you through several examples.
Reply
#5
If I understand correctly, the 100 Hz frequency is the measurement rate, which may well be different from the rate used for the data transmission. Sensors very often buffer their data and send it in blocks once they have accumulated some, rather than as soon as each sample is taken. Your transmission is 9600 baud, which means around 1000 characters per second, ten times faster than 100 Hz; that buffering is probably why you get gaps between blocks, and the only way to improve it may be to set up a higher rate (900 Hz could be good), but is that possible? Can you tell us what your device is? We need that in order to tell you more.
Anyway, it's very little data, even if your computer is slow. For instance, the smallest Raspberry Pi is more than enough to capture it all and write it down to a file.
You can use multithreading if you want to produce an excellent and sophisticated job, but I believe it isn't mandatory in this case. Just start a process with a very simple, robust script that writes to a file as you did. You can change 'w' to 'a' (for append), which will keep the existing data; from time to time, close the file and open a new one. Then start a second process for the processing part. That one will have all the time it needs and can, for instance, clean up the old files. Tell us what computer you use, but I'm pretty sure there is no computer or micro-controller too slow to manage all of that. The $10 Raspberry Pi would be a Rolls-Royce; I remember using one for an RFID reader that produced a lot of data, and it was fine.
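For instance, a rough sketch of that append-and-rotate idea (the port settings and the 60-second rotation interval below are only placeholders):

import datetime
import time

import serial

ser = serial.Serial('COM3', 460800, timeout=0.1)  # placeholder port settings

ROTATE_EVERY = 60  # seconds of data per log file (placeholder)

while True:
    # one file per time slice, named after the moment it was started
    name = 'capture_{:%Y%m%d_%H%M%S}.txt'.format(datetime.datetime.now())
    with open(name, 'a') as f:
        slice_end = time.time() + ROTATE_EVERY
        while time.time() < slice_end:
            line = ser.readline()
            if line:
                f.write('{}    {}\n'.format(datetime.datetime.now(),
                                            line.strip().decode('ascii', 'replace')))
    # the closed files can then be picked up by a second, independent process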
Nice job, anyway; this kind of stuff is always fun...
Reply
#6
Great thank you!

I have been reading up on threading, and the way I understand it, it basically expands your code from vertical to horizontal (running it in parallel). So if you had 5 workers, instead of each waiting for the one before it to finish before it can do its task, they can all work at the same time. Here is a script from a video I watched:

import threading
import time


def talker(n, name):
    print('{} is sleeping\n'.format(name))
    time.sleep(n)
    print('{} has woken up!\n'.format(name))


start = time.time()

threads_list = []

for k in range(5):
    t = threading.Thread(target=talker, name='thread{}'.format(k), args=(5, 'thread{}'.format(k)))
    threads_list.append(t)
    t.start()


for t in threads_list:
    t.join()

end = time.time()

print('time taken: {}'.format(end-start))
Without threading this task would take 25 seconds. With threading, each worker does the same job at the same time, so the code only runs for 5 seconds:

Output:
thread0 is sleeping thread1 is sleeping thread2 is sleeping thread3 is sleeping thread4 is sleeping thread0 has woken up! thread2 has woken up! thread3 has woken up! thread4 has woken up! thread1 has woken up! time taken: 5.001632213592529
But how does this work with ser.readline, Larz60+? How do I define a function to read the line and write it to a file?

Can I have workers perform different tasks?

Here's what I have tried:

import threading
import time
import datetime
import serial


ser = serial.Serial()
ser.port = 'COM2'
ser.baudrate = 460800
ser.timeout = 0.02
ser.bytesize = 8
ser.stopbits = 1

ser.open()

f = open('thread.txt', 'w')


def reader():
    line = ser.readline()
    time = datetime.datetime.now()
    f.write(str(time) + '    ' + str(line.replace(b'\r\n', b'').decode('utf-8')) + '\n')


while 1:
    start = time.time()

    threads_list = []

    for k in range(5):
        t = threading.Thread(target=reader)
        threads_list.append(t)
        t.start()

    for t in threads_list:
        t.join()

    end = time.time()

    print('time taken: {}'.format(end-start))


ser.close()
and the result:

Output:
2018-12-13 17:11:43.397547 2018-12-13 17:11:43.397547 2018-12-13 17:11:43.397547 2018-12-13 17:11:43.397547 2018-12-13 17:11:43.397547 2018-12-13 17:11:43.418060 2018-12-13 17:11:43.418060 2018-12-13 17:11:43.418060 2018-12-13 17:11:43.418060 2018-12-13 17:11:43.418060 2018-12-13 17:11:43.438575 2018-12-13 17:11:43.438575 2018-12-13 17:11:43.438575 2018-12-13 17:11:43.438575 ...
It's not even reading the line from the port.
Reply
#7

  1. Are the 'lines' worthy of individual files, or are they just lines?
  2. is there a sequence that has to be followed as to which line goes first?
If 1 is true, open and close each file in the thread.
If 2 is true, you will need to create a FIFO (First In, First Out) buffer; you might want to look at: https://pypi.org/project/thread6/

There are other packages available; see: https://pypi.org/search/?q=FIFO+thread+buffer
Reply
#8
I'm not sure I fully understand, specifically why you would need threading for this. A serial line is 'serial', so it doesn't look appropriate to read it from many places at once. So why threading for just two tasks?

I tried a simulation with another approach: getting data from the serial port (simulated), writing it to a file, and then reading that file with a different process.

The script that gets and writes the data:
python3 processPut.py
# processPut.py
import random,time

i = 0
while True:
	# serial data reading simulation
	r = random.random()
	data = str(r)
	# writing in a new file
	i += 1
	open('serialdata/%s'%i, 'w').write(data)
	# wait for the next data
	time.sleep(r)
The one that reads and processes it:
python3 processGet.py
# processGet.py
import os,os.path,time

while True:
	pathname = 'serialdata'
	print('-----reading files in %s' % pathname)
	filelist = os.listdir(pathname)
	filelist.sort(key=int)  # file names are numbers: sort as integers so '10' follows '9'
	for filename in filelist:
		filepath = os.path.join(pathname,filename)
		# get data
		print(filepath)
		data = open(filepath,'r').readlines()
		if data:
			print(data)
		# delete the file
		os.unlink(filepath)
	# wait for the next data
	time.sleep(2)
The result:
-----reading files in serialdata
-----reading files in serialdata
-----reading files in serialdata
serialdata/1
['0.6398039220815236']
serialdata/2
['0.4881670808639216']
-----reading files in serialdata
serialdata/3
['0.6014663230495386']
serialdata/4
['0.4493510590148364']
serialdata/5
['0.1401244033524185']
serialdata/6
['0.7529608603800543']

and so on...
One advantage is that your data stays on the filesystem (which could also be considered a disadvantage...).
Another is that everything is managed by simple OS-level tools, easy to use and maintain.
Reply
#9
JeanMichel's point is correct, so you should follow his advice.
Nevertheless, the packages that I pointed out may be good material for learning threading at another time.
Reply
#10
Quote:jeanMichelBain

It's just an INS package. The only catch (as far as I know) is that I have to run the NovAtel factory program to communicate with the IMU sensor. Since that program actively uses the COM4 port, I use Eltima software (Virtual Serial Port Driver Pro) to split COM4 into two virtual ports, COM4* and COM2* (* meaning virtual). The device talks to the program over COM4, and that information is duplicated on COM2*, where I read it using my Python script. I need to save this data stream for post-processing. It is vital that each line I receive from the device be timestamped so that I can sync it with our other sensors during post-processing.

My problem is (as originally posted) that there seems to be a 0.4 second jump consistently through the data stream.

Quote:2018-12-04 16:59:17.806247 %RAWIMUSA,0,16425.970;0,16425.963510000,00250001,491,32,16,19,-13,24*1ac8f7bf
2018-12-04 16:59:17.807748 %RAWIMUSA,0,16425.980;0,16425.973518000,00250001,492,32,16,9,2,20*840e4396
2018-12-04 16:59:17.810751 %RAWIMUSA,0,16425.990;0,16425.983525000,00250001,491,32,16,24,11,11*f009e545
2018-12-04 16:59:17.971362
2018-12-04 16:59:17.991376
2018-12-04 16:59:18.011390
2018-12-04 16:59:18.031404
2018-12-04 16:59:18.051418
2018-12-04 16:59:18.071433
2018-12-04 16:59:18.091446
2018-12-04 16:59:18.111460
2018-12-04 16:59:18.131474
2018-12-04 16:59:18.151488
2018-12-04 16:59:18.171502
2018-12-04 16:59:18.194518 %RAWIMUSA,0,16426.030;0,16426.023556000,00250001,492,32,16,15,13,35*cf4498a9
2018-12-04 16:59:18.196019 %RAWIMUSA,0,16426.040;0,16426.033564000,00250002,490,32,16,12,-16,32*285c7481
2018-12-04 16:59:18.197520 %RAWIMUSA,0,16426.050;0,16426.043571000,00250001,492,32,16,26,21,23*fce60dbf
2018-12-04 16:59:18.199021 %RAWIMUSA,0,16426.060;0,16426.053579000,00250001,494,33,16,10,10,18*ecebcd5c
2018-12-04 16:59:18.201023 %RAWIMUSA,0,16426.070;0,16426.063586000,00250001,491,32,16,16,-6,32*948490d5
2018-12-04 16:59:18.203524 %RAWIMUSA,0,16426.080;0,16426.073594000,00250002,491,32,16,15,27,28*b4abd72d

I tell the sensor to log in ASCII, as you can see from "RAWIMUSA". It provides a header with each line and then the six IMU measurements. This is documented by NovAtel on their website.

This 0.4 second jump is causing issues and I would like to clean it up if possible.

Any thoughts?
Reply

