PIL doesn't seem to paste full size image.
I have a folder full of images that I crop to a certain size (100x100 in this case), and then I average the colour of each image.

Now for a bit of logic (this is where I might have gone wrong):
If I have an image whose size is 567x376 and I make each pixel '100 times bigger' - I am actually replacing each pixel with one of the images I cropped previously, which is technically scaling each pixel up 100 times - that in turn makes the whole image 100 times bigger along each axis.
See, if the image is 567 pixels wide and each pixel is scaled by 100, it should become 56,700 pixels wide, right? Well, it does. And that means each pixel position - [0,0], [0,1], [0,2], [0,3] - becomes 100 times larger - [0,0], [0,100], [0,200], [0,300]. If I paste a 100x100 image at each of these new positions, after enough images they should fill the new image entirely (exactly 567 images per row - 567*100 = 56,700).
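In code, the mapping I have in mind is something like this (just a sketch - 'tile' stands in for my 100 px crop size):
tile = 100
for y in range(4):
    print((0, y * tile))  # (0, 0), (0, 100), (0, 200), (0, 300) - one tile origin per source pixel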
However, for some reason, each image I paste seems to be much smaller than it should be.

Let me use the exact image I am working with now - it is 768x432, meaning with its pixels 'scaled by 100' it would become 76800x43200, and each pixel is now an entire image. But this is the output I actually get:
[Image: J6bvFJ4.png]
Each image comes out far smaller than expected - according to my logic there should be no empty space, because the images should fill the whole 'canvas'. What is going on? Why are the pasted images so small?

Cropping code:
# img is a PIL Image opened earlier from the folder
width, height = img.size  # get dimensions
crop_size = 100
if crop_size > width or crop_size > height:  # image is too small to be cropped
    img.close()  # just close it
left = (width - crop_size) // 2  # crop from the centre out
top = (height - crop_size) // 2
right = (width + crop_size) // 2
bottom = (height + crop_size) // 2
try:  # a closed image can't perform operations, so the error needs catching
    img = img.crop((left, top, right, bottom))
except ValueError:  # Pillow raises ValueError when operating on a closed image
    pass
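Here is the same logic wrapped in a function, for anyone following along (a quick sketch - 'center_crop' is just a name I made up, and 'some_image.png' is a placeholder path):
from PIL import Image

def center_crop(img, crop_size=100):
    # crop a crop_size x crop_size box out of the centre of img
    width, height = img.size
    if crop_size > width or crop_size > height:
        return None  # too small to crop
    left = (width - crop_size) // 2
    top = (height - crop_size) // 2
    return img.crop((left, top, left + crop_size, top + crop_size))

cropped = center_crop(Image.open('some_image.png'))  # placeholder path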
Pasting code:
from PIL import Image

# options, find_closest, rgb and links are defined elsewhere in the script
to_make = Image.open(options['to_create'])  # open the image the user wants to recreate
img = Image.new('RGB', (to_make.width * 100, to_make.height * 100), color=(0, 0, 0))  # create the new image
for x in range(to_make.width):
    for y in range(to_make.height):
        pix_col = to_make.getpixel((x, y))  # get the pixel colour
        closest = find_closest(pix_col, rgb)  # find the image whose average colour best matches this pixel
        a = Image.open(links[rgb.index(closest)])  # get the path to that image via its rgb colour
        img.paste(a, (x * to_make.width, y * to_make.height))  # paste at steps of 100 rather than 1, otherwise there would be overlap
If each axis increases 100-fold, that would be a 10,000-fold increase in the image's size (area). PIL is likely taking the aspect ratio into account and adjusting the change to each axis to maintain the 100-fold increase.
(Nov-20-2019, 05:34 PM)stullis Wrote: If each axis increases 100-fold, that would be a 10,000-fold increase in the image's size (area). PIL is likely taking the aspect ratio into account and adjusting the change to each axis to maintain the 100-fold increase.

I sort of get you - could you explain a bit more, please?
Let's say we have a 10 x 10 pixel image - a very tiny emote. If we scale each side by 100, the new size would be 1,000 x 1,000 px.

The area of the original is 100 px (10 x 10) and the area of the resized image is 1,000,000 px (1,000 x 1,000).

Compared to each other, the area of the resized image is 10,000 times larger (1,000,000 / 100).

To ensure an image is properly resized, the algorithm likely bases it on the area. That way, an image that doubles in size covers double the area.
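You can sanity-check those numbers in a few lines of Python:
scale = 100
orig_w, orig_h = 10, 10
new_w, new_h = orig_w * scale, orig_h * scale
print(new_w, new_h)                          # 1000 1000
print((new_w * new_h) // (orig_w * orig_h))  # 10000, i.e. scale ** 2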
(Nov-20-2019, 06:00 PM)stullis Wrote: Let's say we have a 10 x 10 pixel image... the area of the resized image is 10,000 times larger.
So, to fix the bug in my code, should I only increase the image by a scale of 10 rather than 100, because a scale of 100 causes a 10,000-fold increase in area whereas a scale of 10 only means a 100-fold increase in area?
Or am I still misunderstanding?
You have it backwards. If you want the image to be larger, you need to use a larger resize number. For instance with the 768 x 432 px image, you'll need to use 10,000 instead of 100 to make it 76800 x 43200 px.

Algebraically, that's because:
(768 x 100) x (432 x 100) = (768 x 432) x (100 x 100) = (768 x 432) x 10,000.

If your intent is a 100-fold increase, then you already have it. What are the dimensions of the resized image, by the way?
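The same arithmetic with your actual image:
w, h = 768, 432
scale = 100
print(w * scale, h * scale)                  # 76800 43200 - each axis scaled by 100
print((w * scale) * (h * scale) // (w * h))  # 10000 - the area grows by scale ** 2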
(Nov-20-2019, 06:10 PM)stullis Wrote: What are the dimensions of the resized image, by the way?
I halve the size of the image to save time and space - the image is 38400 x 21600, meaning the full size would be 76800 x 43200.
Ohh, how silly could the mistake get - all it took was one print statement!
If you take a look at this line:
img.paste(a, (x*to_make.width, y*to_make.height))
mainly:
x*to_make.width, y*to_make.height
I am multiplying the x (and y) value by what I thought was 100. Remember these values: 768x432? Well, it turns out 'to_make' is not 100x100 but 768x432, meaning rather than doing:
[0x100, 0x100] [0x100, 1x100] ....
I was actually doing:
[0x768, 0x432] [0x768, 1x432] ....
meaning the position values were completely wrong!
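The fix is to multiply by the tile size itself (hard-coding 100 here to match the crop size):
tile = 100  # the crop size, not to_make.width / to_make.height
img.paste(a, (x * tile, y * tile))  # each 100x100 tile now lands exactly next to the previous one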
So here's what the output now looks like:
[Image: m1FYjpZ.jpg]
Much better than before!