Python Forum

Hardware question re running Anaconda
Hi everyone

I have written a computationally intensive program which I have optimised for speed by using NumPy. Still, it's a bit slow for my needs, and I am looking at other ways of running it faster.

I am running the program written in Python 3 using Anaconda on Windows 10.

One obvious solution is to upgrade my hardware. Like everyone, I wish to minimise the cost of doing so, so I have a few questions regarding the best way to get my system running faster.

First though, I am running an AMD Phenom II X4 840 quad-core CPU on an M5A97 motherboard with 8 GB RAM, an Nvidia GeForce 720 video card and a 500 GB SSD. Yes, it's pretty old.

The first question is whether upgrading my graphics card would have any effect on performance. Does Anaconda make use of the graphics card's capabilities when crunching numbers in NumPy?

Or would an upgraded motherboard and/or CPU be the best way to go?

What would be the most cost-effective way to improve my hardware to run Python with Anaconda under Windows 10 faster?

Thanks Peter
No first-hand experience, but look at CuPy: https://cupy.chainer.org/
It's available in Anaconda: https://anaconda.org/anaconda/cupy
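
Roughly, CuPy is designed to mirror the NumPy API, so a GPU port can often be as small as swapping the import. A minimal sketch (sizes made up, not tested on your card):

import numpy as np
import cupy as cp  # needs a CUDA-capable GPU and a matching CUDA toolkit

# Plain NumPy on the CPU
a_cpu = np.random.rand(2000, 2000)
b_cpu = np.random.rand(2000, 2000)
c_cpu = a_cpu @ b_cpu

# Same computation with CuPy on the GPU
a_gpu = cp.asarray(a_cpu)   # copy the host array to device memory
b_gpu = cp.asarray(b_cpu)
c_gpu = a_gpu @ b_gpu       # matrix multiply runs on the GPU
c_back = cp.asnumpy(c_gpu)  # copy the result back to the host if needed

Whether this actually beats NumPy depends on the array sizes and how fast the card is; the device transfers themselves have a cost.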
Thanks

I will try CuPy. Is it going to run faster with an older graphics card like my Nvidia GeForce 720?

cheers Peter
(Jan-23-2020, 04:24 AM)pberrett Wrote: Is it going to run faster with an older graphics card like my Nvidia GeForce 720?
I don't know if it will run faster (or how much faster), but both the Nvidia GeForce 720 and Nvidia GeForce 720M cards are listed as CUDA-enabled:
https://developer.nvidia.com/cuda-gpus
So it's worth a try.
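
Before benchmarking anything, you could ask CuPy what it sees. A small sketch (assuming CuPy installs cleanly with a CUDA version that still supports the card):

import cupy as cp

n_devices = cp.cuda.runtime.getDeviceCount()
print("CUDA devices found:", n_devices)

if n_devices > 0:
    props = cp.cuda.runtime.getDeviceProperties(0)
    print("Device 0:", props["name"])  # device name as reported by the CUDA runtime
    print("Compute capability:", cp.cuda.Device(0).compute_capability)

If the device shows up with a compute capability your CuPy build accepts, the drop-in test above should run.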
Thanks

I will give it a try.

cheers Peter
It would be nice if you posted feedback on using CuPy and what the performance effects were.
Cython compiles Python code to fast C code, and gcc then compiles that down to machine code for your architecture. I think you can still use NumPy together with Cython.

One thing I learned about NumPy and Python 3 is that you only get bad performance when you pick the wrong data types. Fixed-point decimal was slow, but unsigned integers of a fixed size are fast. Python resizes the memory for its integers at run time, so by default it is slow even with integers. Setting the memory size before run time is something that is always done in C, and doing the same in Python is why you can get close to C performance.

At some point you will need to just drop Python completely and move to OpenMP + C or Fortran; if you have a GPU workload it will be the CUDA dev kit and C compiled with gcc. How hard do you want to work for that extra performance?
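
To make the data-type point concrete, here is a small sketch of mine that times a sum over boxed Python ints versus a fixed-size unsigned integer array; the sizes are arbitrary and the exact speed-up will vary by machine:

import timeit
import numpy as np

n = 1_000_000
values = list(range(n))

# Arbitrary-precision Python ints boxed in an object array: the slow path
obj_arr = np.array(values, dtype=object)

# Fixed-size unsigned 64-bit integers: the fast, C-like path
u64_arr = np.array(values, dtype=np.uint64)

t_obj = timeit.timeit(lambda: obj_arr.sum(), number=10)
t_u64 = timeit.timeit(lambda: u64_arr.sum(), number=10)

print(f"object dtype: {t_obj:.3f} s for 10 sums")
print(f"uint64 dtype: {t_u64:.3f} s for 10 sums")

The object-dtype array has to call Python-level addition for every element, while the uint64 array sums in a tight C loop, which is where the big difference comes from.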