Python Forum
reduce CPU resources when using lot of "requests"
#1
Hi, sorry for my bad English.
I have an old PC: Intel i5-4460 (4 cores @ 3.20 GHz), Windows 11.
I use it for Python-related tasks, including downloading m3u8 files.
m3u8 segments are small files (usually under 2.5 MB) that together make up a complete online streaming video.
I use concurrent.futures as the queue system and requests for downloading.
If I set concurrent.futures.ThreadPoolExecutor(max_workers=1), it works fine, but the download speed is too slow.
If I set max_workers=2, the download speed is good, but my CPU usage is 50%+.
And if I set it above 2 (or don't set it at all), the download speed is great, but my PC often hits a Black Screen of Death (BSOD).
Do you have any tips on how to "fix" this?
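For reference, a minimal sketch of the setup described above (ThreadPoolExecutor plus requests); the URL list is a placeholder, and streaming each segment to disk in larger chunks keeps both memory and per-iteration CPU overhead low:

```python
import concurrent.futures
import requests

def download_segment(url, path):
    """Stream one m3u8 segment to disk in chunks."""
    with requests.get(url, stream=True, timeout=30) as resp:
        resp.raise_for_status()
        with open(path, "wb") as f:
            # 64 KiB chunks mean far fewer loop iterations than tiny reads
            for chunk in resp.iter_content(chunk_size=64 * 1024):
                f.write(chunk)
    return path

urls = []  # fill with the real segment URLs
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(download_segment, u, f"seg_{i}.ts")
               for i, u in enumerate(urls)]
    results = [f.result() for f in concurrent.futures.as_completed(futures)]
```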
#2
(Oct-11-2024, 02:16 AM)kucingkembar Wrote: If I set concurrent.futures.ThreadPoolExecutor(max_workers=1), it works fine, but the download speed is too slow.
If I set max_workers=2, the download speed is good, but my CPU usage is 50%+.
And if I set it above 2 (or don't set it at all), the download speed is great, but my PC often hits a Black Screen of Death (BSOD).
Do you have any tips on how to "fix" this?
The problem may be the old PC, as ThreadPoolExecutor should not use much CPU (Python threads are bound to one CPU core by the GIL).
ProcessPoolExecutor would use more cores, but if the problem already shows up on one core, that would only make things worse.
You can try a solution that is lighter on the CPU,
e.g. asyncio, to make the download process less CPU-intensive.
Since downloading files is an I/O-bound task, you can use asyncio without putting too much strain on your CPU.
import asyncio
# pip install aiohttp
import aiohttp

async def download_file(session, url, output_path):
    try:
        async with session.get(url) as response:
            if response.status == 200:
                with open(output_path, 'wb') as f:
                    # larger chunks mean fewer loop iterations, so lower CPU usage
                    while chunk := await response.content.read(64 * 1024):
                        f.write(chunk)
                print(f"Downloaded: {output_path}")
            else:
                print(f"Failed to download {url}: Status {response.status}")
    except Exception as e:
        print(f"Error downloading {url}: {e}")

async def main(urls):
    async with aiohttp.ClientSession() as session:
        tasks = []
        for i, url in enumerate(urls):
            output_path = f"file_{i}.ts"  # adjust output filename as needed
            tasks.append(asyncio.create_task(download_file(session, url, output_path)))
        await asyncio.gather(*tasks)

urls = [
    "http://example.com/video1.ts",
    "http://example.com/video2.ts",
    # Add more URLs here
]

asyncio.run(main(urls))
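If CPU load or BSODs are still a concern, you can cap how many downloads run at once with asyncio.Semaphore instead of launching all tasks freely. A minimal sketch of the pattern (MAX_CONCURRENT is a tuning knob of my choosing; asyncio.sleep stands in for the aiohttp call, and the peak counter exists only to demonstrate the cap):

```python
import asyncio

MAX_CONCURRENT = 2  # tune: how many downloads may be in flight at once

async def worker(sem, state):
    # the semaphore blocks here until a slot is free
    async with sem:
        state["active"] += 1
        state["peak"] = max(state["peak"], state["active"])
        await asyncio.sleep(0.01)  # stand-in for the actual download
        state["active"] -= 1

async def main():
    sem = asyncio.Semaphore(MAX_CONCURRENT)
    state = {"active": 0, "peak": 0}
    await asyncio.gather(*(worker(sem, state) for _ in range(10)))
    return state["peak"]

peak = asyncio.run(main())
print(peak)  # never exceeds MAX_CONCURRENT
```

In the real downloader you would wrap the `session.get(...)` call in `async with sem:` the same way, so only MAX_CONCURRENT requests hit the network at a time.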