Python Forum
How to return values from For Loop via return in a function
#1
I wrote a function with a for loop for my TensorFlow object_detection project. It shows me the names of the detected images in test_images as I press keys and view them with cv2. However, I cannot get the b values out when I use return at the end of the function. How can I solve this in Python (how to return values from a for loop via return in a function)? Thank you very much.

def detected_objects_2():
  for image_path in TEST_IMAGE_PATHS:
    image = Image.open(image_path)

    folder_path = "test_images/" #folder path to your images 

    File_Lst = []

    for file in os.listdir(folder_path):
      File_Lst.append(file)
    
    dog_index = File_Lst.index('image1.jpg')           
    dog_str = File_Lst[dog_index]

    img = cv2.imread(folder_path + dog_str )
    cv2.destroyAllWindows()         
    cv2.imshow("Press KEYS to know which direction you want to go with your robot", img)

    image_np = load_image_into_numpy_array(image)
    image_np_expanded = np.expand_dims(image_np, axis=0)
    output_dict = run_inference_for_single_image(image_np, detection_graph)

    a, b = vis_util.visualize_boxes_and_labels_on_image_array(
      image_np,
      output_dict['detection_boxes'],
      output_dict['detection_classes'],
      output_dict['detection_scores'],
      category_index,
      instance_masks=output_dict.get('detection_masks'),
      use_normalized_coordinates=True,
      line_thickness=8)
    plt.figure(figsize=IMAGE_SIZE)
    cv2.destroyAllWindows()
    cv2.imshow("Object Detector", image_np)
    #print(b)
    #if b == 'turnLeft':
      #print("Turn Left !!!")
    #elif b == 'turnRight':
      #print("Turn Right !!!")
    #else:
      #print("NO DETECTION !!!")   
    
    k = cv2.waitKey(0)
    if k == ord('a'): # wait for 'a' key to upload traffic signs one by one
      cv2.destroyAllWindows()
      cv2.imshow("Object Detector", image_np)
      
    elif k == ord('s'):
      cv2.waitKey(0)
      cv2.destroyAllWindows()
      break    
  return b
#2
To return those values, you have to store them in a list outside of the loop and then return the list:

def detected_objects_2():
    b_values = []
    for image_path in TEST_IMAGE_PATHS:
        image = Image.open(image_path)
     
        folder_path = "test_images/" #folder path to your images 
     
        File_Lst = []
     
        for file in os.listdir(folder_path):
            File_Lst.append(file)
         
        dog_index = File_Lst.index('image1.jpg')           
        dog_str = File_Lst[dog_index]
     
        img = cv2.imread(folder_path + dog_str )
        cv2.destroyAllWindows()         
        cv2.imshow("Press KEYS to know which direction you want to go with your robot", img)
     
        image_np = load_image_into_numpy_array(image)
        image_np_expanded = np.expand_dims(image_np, axis=0)
        output_dict = run_inference_for_single_image(image_np, detection_graph)
     
        a, b = vis_util.visualize_boxes_and_labels_on_image_array(
          image_np,
          output_dict['detection_boxes'],
          output_dict['detection_classes'],
          output_dict['detection_scores'],
          category_index,
          instance_masks=output_dict.get('detection_masks'),
          use_normalized_coordinates=True,
          line_thickness=8)
        b_values.append(b)
        plt.figure(figsize=IMAGE_SIZE)
        cv2.destroyAllWindows()
        cv2.imshow("Object Detector", image_np)
        #print(b)
        #if b == 'turnLeft':
          #print("Turn Left !!!")
        #elif b == 'turnRight':
          #print("Turn Right !!!")
        #else:
          #print("NO DETECTION !!!")   
         
        k = cv2.waitKey(0)
        if k == ord('a'): # wait for 'a' key to upload traffic signs one by one
            cv2.destroyAllWindows()
            cv2.imshow("Object Detector", image_np)
           
        elif k == ord('s'):
            cv2.waitKey(0)
            cv2.destroyAllWindows()
            break    
    return b_values
#3
You can use a generator function to do this. (It's pretty idiomatic Python).
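A minimal sketch of that idea (the image list and the b = "turnLeft" line below are just placeholders standing in for the real detection code): instead of collecting values in a list and returning it at the end, the function yields each value as the loop produces it.

TEST_IMAGE_PATHS = ["test_images/test1.jpg", "test_images/test2.jpg"]  # placeholder list

def detected_objects_gen():
    for image_path in TEST_IMAGE_PATHS:
        # ... run inference and visualization here, producing b ...
        b = "turnLeft"  # placeholder for the label the detection code extracts
        yield b         # hand back one value per image instead of building a list

# The caller iterates over the values as they are produced:
for label in detected_objects_gen():
    print(label)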
#4
(Jan-03-2019, 03:04 PM)stullis Wrote: To return those values, you have to store them in a list outside of the loop and then return the list.

When I add the lines of code you suggested, I get this error now:

Traceback (most recent call last):
  File "/home/aykut/models/research/object_detection/Object_detection_main_file.py", line 237, in <module>
    detected_objects_2()
  File "/home/aykut/models/research/object_detection/Object_detection_main_file.py", line 209, in detected_objects_2
    line_thickness=8)
ValueError: too many values to unpack (expected 2)
How can I get rid of this error (ValueError: too many values to unpack (expected 2))? Thank you.

I believe the error is caused at the beginning of the for loop.

These are the paths of the files I use in the for loop:
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'test{}.jpg'.format(i)) for i in range(1, 7) ]
The ValueError: too many values to unpack (expected 2) appears right at the beginning of the for loop (I tried to print the image path and got the same error, so it happens here):
for image_path in TEST_IMAGE_PATHS:
    c = image_path
    print(c)
    image = Image.open(image_path) 
    image_np = load_image_into_numpy_array(image)
    image_np_expanded = np.expand_dims(image_np, axis=0)
    output_dict = run_inference_for_single_image(image_np, detection_graph)
......
However, I don't know how to solve this error. I have 6 images in my folder to test object detection on, and when I press the keys, cv2.imshow switches to the next image so I can check the detection visually.

I would be very happy if someone could help me solve this. I searched the internet but couldn't find a solution to this problem.

UPDATE:

I found the cause of the error above. But now I have another problem: when I try to print the function's return value b_values, nothing is printed, as if the function doesn't return anything.

def main():

  c = detected_objects_2()

  if c == 'turnLeft':
    print("Turn Left is worked!!!")
  elif c == 'turnRight':
    print("Turn Right is worked!!!")
  else:
    print("NO DETECTION at all!!!")  


main()
Why doesn't my function return anything even though everything seems fine? Thank you.
#5
Since the values are appended to a list, I rewrote the comparison this way to check the function's return value, but I still have the same problem: the function doesn't seem to return any value.
def main():

  c = detected_objects_2()

  if c == "['turnLeft']":
    print("Turn Left is working!!!")
  elif c == "['turnRight']":
    print("Turn Right is working!!!")
  else:
    print("NO DETECTION at all!!!")  


main()
Could anyone help me print the return values from the function so I can see whether the code works as desired?

Currently, nothing seems to be returned at all :(.
#6
You've encountered two problems. First, the error you were receiving was due to unpacking the return value into fewer variables than there were values to unpack. Like this:

def return_3_vars():
    return (1,2,3)

a, b = return_3_vars() # Assigns two variables but has three values to assign; raises error
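If you only need the first two values, one way to avoid the mismatch (just a sketch of standard Python unpacking, independent of the detection code) is extended unpacking, which collects the leftover values into a list:

def return_3_vars():
    return (1, 2, 3)

a, b, *rest = return_3_vars()  # a = 1, b = 2, rest = [3]; no ValueError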
The new problem you're encountering is that variable c is a list, not a single value. The comparisons are testing for a single string. You need to loop over c or index into it:

def main():
    c = detected_objects_2()

    for value in c:
        if value == "['turnLeft']":
            print("Turn Left is working!!!")
        elif value == "['turnRight']":
            print("Turn Right is working!!!")
        else:
            print("NO DETECTION at all!!!")  
 
main()
#7
(Jan-05-2019, 02:11 PM)stullis Wrote: You've encountered two problems. First, the error you were receiving was due to unpacking the return value into fewer variables than there were values to unpack.

I tried what you said, but I still cannot see the results in the Python shell.

Still no results.

When I add print(b_values) inside the for loop, it does print, but as I move on to the other images it just prints a longer and longer list on the same line. Other than that, the code you wrote in main doesn't show any results in the Python shell at all.

#8
Oh wait. I didn't remove the brackets in the logic statements:

def main():
    c = detected_objects_2()
 
    for value in c:
        if value == "turnLeft":
            print("Turn Left is working!!!")
        elif value == "turnRight":
            print("Turn Right is working!!!")
        else:
            print("NO DETECTION at all!!!")  
  
main()
#9
(Jan-05-2019, 02:45 PM)stullis Wrote: Oh wait. I didn't remove the brackets in the logic statements.

The result is still the same: nothing appears :(

This is the whole TensorFlow code for detecting objects in the test_images folder:
import matplotlib
matplotlib.use('Agg')
# # Imports

# In[1]:


import numpy as np
import os
import six.moves.urllib as urllib
import sys
import tarfile
import tensorflow as tf
import zipfile
import cv2
from distutils.version import StrictVersion
from collections import defaultdict
from io import StringIO
from matplotlib import pyplot as plt
from PIL import Image

# This is needed since the notebook is stored in the object_detection folder.
sys.path.append("..")
from object_detection.utils import ops as utils_ops

if StrictVersion(tf.__version__) < StrictVersion('1.9.0'):
  raise ImportError('Please upgrade your TensorFlow installation to v1.9.* or later!')


# ## Env setup

# In[ ]:


# This is needed to display the images.
#get_ipython().run_line_magic('matplotlib', 'inline')


# ## Object detection imports
# Here are the imports from the object detection module.

# In[ ]:


from utils import label_map_util

from utils import visualization_utils as vis_util


# # Model preparation 

# ## Variables
# 
# Any model exported using the `export_inference_graph.py` tool can be loaded here simply by changing `PATH_TO_FROZEN_GRAPH` to point to a new .pb file.  
# 
# By default we use an "SSD with Mobilenet" model here. See the [detection model zoo](https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md) for a list of other models that can be run out-of-the-box with varying speeds and accuracies.

# In[ ]:


# What model to download.
#MODEL_NAME = 'ssd_mobilenet_v1_coco_2017_11_17'
MODEL_NAME = 'trafficSign_turnLeft_turnRight_graph'
#MODEL_FILE = MODEL_NAME + '.tar.gz'
#DOWNLOAD_BASE = 'http://download.tensorflow.org/models/object_detection/'

# Path to frozen detection graph. This is the actual model that is used for the object detection.
PATH_TO_FROZEN_GRAPH = MODEL_NAME + '/frozen_inference_graph.pb'

# List of the strings that is used to add correct label for each box.
#PATH_TO_LABELS = os.path.join('data', 'mscoco_label_map.pbtxt')
PATH_TO_LABELS = os.path.join('data', 'labelmap.pbtxt')

#general_object_detection = 'ssd_mobilenet_v1_coco_2017_11_17'
#trafficSign_object_detection =''

# ## Download Model

# In[ ]:


#opener = urllib.request.URLopener()
#opener.retrieve(DOWNLOAD_BASE + MODEL_FILE, MODEL_FILE)
#tar_file = tarfile.open(MODEL_FILE)
#for file in tar_file.getmembers():
  #file_name = os.path.basename(file.name)
  #if 'frozen_inference_graph.pb' in file_name:
    #tar_file.extract(file, os.getcwd())


# ## Load a (frozen) Tensorflow model into memory.

# In[ ]:


detection_graph = tf.Graph()
with detection_graph.as_default():
  od_graph_def = tf.GraphDef()
  with tf.gfile.GFile(PATH_TO_FROZEN_GRAPH, 'rb') as fid:
    serialized_graph = fid.read()
    od_graph_def.ParseFromString(serialized_graph)
    tf.import_graph_def(od_graph_def, name='')


# ## Loading label map
# Label maps map indices to category names, so that when our convolution network predicts `5`, we know that this corresponds to `airplane`.  Here we use internal utility functions, but anything that returns a dictionary mapping integers to appropriate string labels would be fine

# In[ ]:


category_index = label_map_util.create_category_index_from_labelmap(PATH_TO_LABELS, use_display_name=True)


# ## Helper code

# In[ ]:


def load_image_into_numpy_array(image):
  (im_width, im_height) = image.size
  return np.array(image.getdata()).reshape(
      (im_height, im_width, 3)).astype(np.uint8)


# # Detection

# In[ ]:


# For the sake of simplicity we will use only 2 images:
# image1.jpg
# image2.jpg
# If you want to test the code with your images, just add path to the images to the TEST_IMAGE_PATHS.
PATH_TO_TEST_IMAGES_DIR = 'test_images'
TEST_IMAGE_PATHS = [ os.path.join(PATH_TO_TEST_IMAGES_DIR, 'test{}.jpg'.format(i)) for i in range(1, 7) ]
#TEST_IMAGE_PATHS = os.path.join(PATH_TO_TEST_IMAGES_DIR, 'test1.jpg')

#cwd = os.getcwd()
#files = os.listdir(cwd)
#print("Files in '%s': %s" % (cwd, files))

IMAGE_NAME = 'test2.jpg'

# Grab path to current working directory

CWD_PATH = os.getcwd()

# Path to image

PATH_TO_IMAGE = os.path.join(CWD_PATH,IMAGE_NAME)

# Size, in inches, of the output images.
IMAGE_SIZE = (12, 8)


# In[ ]:


def run_inference_for_single_image(image, graph):
  with graph.as_default():
    with tf.Session() as sess:
      # Get handles to input and output tensors
      ops = tf.get_default_graph().get_operations()
      all_tensor_names = {output.name for op in ops for output in op.outputs}
      tensor_dict = {}
      for key in [
          'num_detections', 'detection_boxes', 'detection_scores',
          'detection_classes', 'detection_masks'
      ]:
        tensor_name = key + ':0'
        if tensor_name in all_tensor_names:
          tensor_dict[key] = tf.get_default_graph().get_tensor_by_name(
              tensor_name)
      if 'detection_masks' in tensor_dict:
        # The following processing is only for single image
        detection_boxes = tf.squeeze(tensor_dict['detection_boxes'], [0])
        detection_masks = tf.squeeze(tensor_dict['detection_masks'], [0])
        # Reframe is required to translate mask from box coordinates to image coordinates and fit the image size.
        real_num_detection = tf.cast(tensor_dict['num_detections'][0], tf.int32)
        detection_boxes = tf.slice(detection_boxes, [0, 0], [real_num_detection, -1])
        detection_masks = tf.slice(detection_masks, [0, 0, 0], [real_num_detection, -1, -1])
        detection_masks_reframed = utils_ops.reframe_box_masks_to_image_masks(
            detection_masks, detection_boxes, image.shape[0], image.shape[1])
        detection_masks_reframed = tf.cast(
            tf.greater(detection_masks_reframed, 0.5), tf.uint8)
        # Follow the convention by adding back the batch dimension
        tensor_dict['detection_masks'] = tf.expand_dims(
            detection_masks_reframed, 0)
      image_tensor = tf.get_default_graph().get_tensor_by_name('image_tensor:0')

      # Run inference
      output_dict = sess.run(tensor_dict,
                             feed_dict={image_tensor: np.expand_dims(image, 0)})

      # all outputs are float32 numpy arrays, so convert types as appropriate
      output_dict['num_detections'] = int(output_dict['num_detections'][0])
      output_dict['detection_classes'] = output_dict[
          'detection_classes'][0].astype(np.uint8)
      output_dict['detection_boxes'] = output_dict['detection_boxes'][0]
      output_dict['detection_scores'] = output_dict['detection_scores'][0]
      if 'detection_masks' in output_dict:
        output_dict['detection_masks'] = output_dict['detection_masks'][0]
  return output_dict


def detected_objects_2():
    b_values = []
    for image_path in TEST_IMAGE_PATHS:
        image = Image.open(image_path)
      
        folder_path = "test_images/" #folder path to your images 
      
        File_Lst = []
      
        for file in os.listdir(folder_path):
            File_Lst.append(file)
          
        dog_index = File_Lst.index('image1.jpg')           
        dog_str = File_Lst[dog_index]
      
        img = cv2.imread(folder_path + dog_str )
        cv2.destroyAllWindows()         
        cv2.imshow("Press KEYS to know which direction you want to go with your robot", img)
      
        image_np = load_image_into_numpy_array(image)
        image_np_expanded = np.expand_dims(image_np, axis=0)
        output_dict = run_inference_for_single_image(image_np, detection_graph)
      
        a, b = vis_util.visualize_boxes_and_labels_on_image_array(
          image_np,
          output_dict['detection_boxes'],
          output_dict['detection_classes'],
          output_dict['detection_scores'],
          category_index,
          instance_masks=output_dict.get('detection_masks'),
          use_normalized_coordinates=True,
          line_thickness=8)
        b_values.append(b)

        plt.figure(figsize=IMAGE_SIZE)
        cv2.destroyAllWindows()
        cv2.imshow("Object Detector", image_np)

        #print(b_values)  
        k = cv2.waitKey(0)
        if k == ord('a'): # wait for 'a' key to upload traffic signs one by one
            cv2.destroyAllWindows()
            cv2.imshow("Object Detector", image_np)
            
        elif k == ord('s'):
            cv2.waitKey(0)
            cv2.destroyAllWindows()
            break    
    return b_values


def main():
  c = detected_objects_2()
  
  for value in c:
    if value == "turnLeft":
      print("Turn Left is working!!!")
    elif value == "turnRight":
      print("Turn Right is working!!!")
    else:
      print("NO DETECTION at all!!!")  
   
main()
The code seems correct, but I don't understand why it doesn't return anything.
#10
Okay, let's track down the problem. Add a print() call to main to see what data is in c:

def main():
  c = detected_objects_2()
  print(c)
   
  for value in c:
    if value == "turnLeft":
      print("Turn Left is working!!!")
    elif value == "turnRight":
      print("Turn Right is working!!!")
    else:
      print("NO DETECTION at all!!!")  
If the data in c is not what you expect, then review vis_util.visualize_boxes_and_labels_on_image_array() to see what it returns.
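For example, one quick way to check (a sketch that reuses the names from your function; as far as I know the stock TensorFlow utility returns only the annotated image array, so any extra value must come from a locally modified copy of vis_util) is to capture the whole return value in a single name before unpacking it:

result = vis_util.visualize_boxes_and_labels_on_image_array(
    image_np,
    output_dict['detection_boxes'],
    output_dict['detection_classes'],
    output_dict['detection_scores'],
    category_index,
    instance_masks=output_dict.get('detection_masks'),
    use_normalized_coordinates=True,
    line_thickness=8)
print(type(result))  # tuple? list? a single array?
print(result)        # then decide how many names to unpack it into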

