Python Forum

Full Version: Get a value from another script
Hello,

I'm new here on the forum, as well as new to coding with Python in general. I have a question about reading values from another script. The setup is:

first.py (main loop. includes code that runs an lcd, graphics etc.)

second.py (another loop that collects data from a sensor. let's say a temperature reading)

I'd like to return a value from second.py and use it in first.py without holding up my main loop. I don't want to wait until a particular sensor processes the data and spits out a result; I'd rather let it save the result somewhere when it's done and read it later. Right now I'm writing the sensor value from second.py into a .txt file and reading that value in first.py, but that seems like the wrong way of doing it. I know about modules and importing stuff, but I'm not exactly sure how to do it. If I simply import second.py, it takes over my main loop.

How can I handle this issue?
There are different possibilities.

The pragmatic one is to have only one loop and only one running instance of the Python interpreter.
In my project I use multiprocessing to start all instances and exchange information between the processes.
Additionally I have to handle a lot of data in the background for signal acquisition and processing. For this task I use
zmq, a message queue for exchanging data between processes over the network. You don't need a server for zmq; it's
a very low-level library. The abstraction is not very high, but that allows fast processing.

If you don't have much data to exchange, the multiprocessing solution may be enough.
Writing this was a little refresher for me. Choose whatever looks good for your use case.
Please also read the related documentation.

import time
from multiprocessing import Manager, Process


def worker1(namespace, iterations):
    for i in range(iterations):
        namespace.worker1_var = i

def worker2(namespace, iterations):
    for i in range(iterations):
        result = namespace.worker1_var * i
        print('Worker2:', result)

print('Multiprocessing with namespace:')


# Create a manager, which starts a new Python
# Process to act as a manager between the other
# processes
manager = Manager()
# Namespace is like a class, where you can assign or get
# your variables
ns = manager.Namespace()
# Creating a new variable in the namespace
ns.worker1_var = 0


# create new processes and pass the namespace as an argument to the function
proc1 = Process(target=worker1, args=(ns, 10))
proc2 = Process(target=worker2, args=(ns, 10))
# start processes
proc1.start()
proc2.start()
# wait until the processes are finished
proc1.join()
proc2.join()

print('\nMultiprocessing with queues:')
# queues are often the better choice for jobs;
# for configuration or something like global state
# I prefer the namespace

def generator(queue):
    for i in range(10):
        queue.put(i)

def worker(queue):
    while True:
        print('Worker:', queue.get())

# Now make a queue object, which is handled by the manager;
# the manager controls the queue between the processes,
# and everything important, like locking, is handled for you
proc_gen = Process(target=generator, args=(queue,))
proc_work = Process(target=worker, args=(queue,))
proc_gen.start()
proc_work.start()

proc_gen.join()
proc_work.terminate() # killing this endless process


# --------------------------------------

import threading
import zmq


print('\nZMQ Example in one process,\ncheating with threading to do this in one process.')

# this is for example in first.py
def zmq_publish(pub):
    for i in range(10):
        data = str(i).encode()
        #print('Data has been sent.')
        pub.send_multipart([b'velocity', data])

# in second.py for example
def printer(sub):
    while True:
        topic, data = sub.recv_multipart()
        value = int(data)
        print(topic.decode(), value)
        if value == 9:
            break

context = zmq.Context()
publisher = context.socket(zmq.PUB)
subscriber = context.socket(zmq.SUB)
publisher.bind('tcp://127.0.0.1:5090')
subscriber.connect('tcp://127.0.0.1:5090')
subscriber.subscribe(b'velocity')

# give the subscriber a little bit of time to connect,
# otherwise the data is already sent before the
# subscriber has connected,
# and then that data is lost

time.sleep(1)

# this is one of the big benefits of zmq: the order of connection
# doesn't matter.
# the publisher can already send while there is no subscriber yet,
# and the other way around;
# the connection is established magically in the background
#
# but you can also choose the other patterns of zmq:
# Publish -> Subscribe
# Request <-> Reply
# Push -> Pull



thread = threading.Thread(target=zmq_publish, args=(publisher,))
thread.start()
# obviously multiprocessing.Process has the
# same interface as threading.Thread
# I'm using a thread here only for demonstration
# in one single process
# This won't work with multiprocessing, because the context
# is not created inside the generator/worker process;
# with a thread it works because of shared memory

printer(subscriber)

# destroying the context
# closes everything
context.destroy()
You should start with queues first.
There are also other message queues with a higher level of abstraction.
Some of them, like MQTT, need a server (broker).

If you post your code, we can tell you which way could be the best.
It always depends on the task.
Why does that seem like the wrong way of doing it? That is the only way of doing it. You cannot simultaneously access data in real time and not access data in real time, regardless of whether you import the script or not. Maybe I am misunderstanding the question, but it seems to me the system is working exactly how you want it to.
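If you do stay with the file approach, the one real pitfall is first.py reading a half-written file. A minimal sketch of a safer hand-off, assuming a hypothetical shared file name and a temperature value; the trick is writing to a temporary file and renaming it into place, which is atomic:

```python
import json
import os
import tempfile

SENSOR_FILE = 'sensor_value.json'  # hypothetical shared file name

# second.py side: write the reading atomically, so first.py
# never sees a partially written file
def store_reading(value):
    fd, tmp_path = tempfile.mkstemp(dir='.')
    with os.fdopen(fd, 'w') as f:
        json.dump({'temperature': value}, f)
    os.replace(tmp_path, SENSOR_FILE)  # atomic rename into place

# first.py side: read the last stored value without blocking
def load_reading(default=None):
    try:
        with open(SENSOR_FILE) as f:
            return json.load(f)['temperature']
    except (FileNotFoundError, json.JSONDecodeError):
        return default

store_reading(21.5)
print(load_reading())  # 21.5
```

first.py just calls load_reading() once per loop iteration and gets either the latest complete value or the default.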
A strategy is that first.py starts a thread or another process that runs second.py. You need to include a mechanism in first.py to detect whether second.py's task is finished and to get the result.

For a tkinter program, there is a general recipe here that does just that.
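A minimal sketch of that strategy, assuming second.py's work is a hypothetical read_sensor() function: first.py launches it in a worker thread and polls a queue each loop iteration with get_nowait(), so the main loop never blocks while the measurement is running:

```python
import queue
import threading
import time

# hypothetical stand-in for the sensor code in second.py
def read_sensor():
    time.sleep(0.1)   # simulate a slow measurement
    return 21.5       # pretend temperature reading

def sensor_worker(results):
    # runs in the background; the main loop never waits on it
    results.put(read_sensor())

results = queue.Queue()
threading.Thread(target=sensor_worker, args=(results,), daemon=True).start()

latest = None
for _ in range(10):   # stand-in for the lcd/graphics main loop
    try:
        latest = results.get_nowait()   # non-blocking check
    except queue.Empty:
        pass                            # no new reading yet, carry on
    time.sleep(0.05)

print('last reading:', latest)
```

The same structure works with multiprocessing.Process and multiprocessing.Queue if the sensor code should run in its own interpreter.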
By the way, you can also use threading in one single process.
For GUI applications it's very important not to have blocking code.
Depending on what you do exactly, it could also be solved with synchronous code.

If you don't post your code, the discussion will not end :-D
first.py calls a function in second.py that returns a list (or whatever) of the data collected. You will have to be a lot more specific, and provide some code, for anything further.
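The reason importing second.py "takes over" the main loop is usually module-level code running at import time. Wrapping the collection in a function and guarding any standalone loop with `if __name__ == '__main__':` avoids that. A minimal sketch with hypothetical names, simulating both files in one script by writing second.py to disk first:

```python
import pathlib
import sys

# what second.py might look like: the collection is wrapped in a
# function, and the standalone loop is guarded, so merely importing
# the module runs nothing by itself
pathlib.Path('second.py').write_text(
    "def collect(samples):\n"
    "    # hypothetical sensor read; returns a list of readings\n"
    "    return [20.0 + 0.5 * i for i in range(samples)]\n"
    "\n"
    "if __name__ == '__main__':\n"
    "    print(collect(3))  # runs only when second.py is started directly\n"
)

# first.py side: the import only defines collect(), nothing loops
sys.path.insert(0, '.')
import second

readings = second.collect(3)
print(readings)  # [20.0, 20.5, 21.0]
```

With this structure, first.py decides when and how often to call collect(), instead of the import starting second.py's own loop.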