May-30-2017, 02:46 PM
I was sneered at for having too broad a question on Stack Overflow, so I hope this is a reasonable place for a poorly-defined question, because I need help in making it better-defined.
I've been working on refining a solution to the strategy game Qubic, and have close to 100 million positions solved. That's actually just a beginning, so you get some sense of the scope of what I'm working on. There is a known solution, but it is not optimal. I want optimal. It may take 10 years.
I've been doing the work mostly in C and Python: C where it has to be fast (the actual solver) and Python where the development has to be fast (everywhere else). I am the bottleneck, since I'm having trouble putting together a framework that organizes the solving process.
I have several multi-core machines: two Core i7s, a 16-core Xeon, and a Core i5 laptop. The Xeon also has a thousand or so available GPU cores on a spare Nvidia card, but I haven't even started coding for that.
So I want to create a framework that will manage the solver. The main problem is that I cannot predict how long it will take to solve any position. The way I have been working, a position may take 6 minutes on my fastest machine, or may exhaust my patience after a couple of weeks (at which point I usually have to take the machine down for updates or some such). So I plan on a time limit of a few hours. When a solver times out, I plan to derive the possible subsequent positions (after 2 moves) and solve those. Generally, almost all of the subsequent positions involve blunders and are solved very quickly, and I'm left with one or two that are slightly easier to solve than the original position. Positions are expressed as 64-character strings. Solutions (or failures) are expressed in somewhat longer strings, less than 512 characters.
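To make the timeout idea concrete, here is a minimal sketch of running the C solver under a wall-clock limit from Python. The solver binary path (`./solver`) and its stdin/stdout protocol (position string in, solution string out) are assumptions for illustration, not the actual interface:

```python
import subprocess

TIME_LIMIT = 4 * 3600  # "a few hours", in seconds

def solve(position, solver_cmd=("./solver",)):
    """Run the external solver on one 64-character position string.

    Returns the solver's output string, or None on timeout,
    in which case the caller would derive the two-move successor
    positions and queue those instead.
    ('./solver' is a hypothetical binary that reads a position on
    stdin and writes its solution string to stdout.)
    """
    try:
        result = subprocess.run(
            list(solver_cmd),
            input=position,
            capture_output=True,
            text=True,
            timeout=TIME_LIMIT,
        )
        return result.stdout.strip()
    except subprocess.TimeoutExpired:
        return None
```

`subprocess.run(..., timeout=...)` kills the child when the limit expires, which keeps the C solver completely decoupled from the Python framework.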
I've started looking at the Python modules multiprocessing, subprocess, threading, and pyzmq. All are interesting, and I'm having trouble deciding which to use. I need some guidance.
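For comparison, this is roughly what the multiprocessing route could look like: a pool of workers pulling positions and returning results as they finish, which suits wildly varying solve times. `solve_one` is a stand-in stub here; in practice it would invoke the C solver:

```python
from multiprocessing import Pool

def solve_one(position):
    # Stub for illustration: the real version would call the C solver
    # (e.g. via subprocess) and return its solution-or-failure string.
    return position, "solved"

def solve_batch(positions, workers=4):
    """Farm positions out to worker processes.

    imap_unordered yields results in completion order rather than
    submission order, so quickly-refuted blunder positions come back
    immediately instead of waiting behind a hard one.
    """
    with Pool(workers) as pool:
        for position, outcome in pool.imap_unordered(solve_one, positions):
            yield position, outcome
```

One caveat either way: a per-task timeout is awkward to express with `Pool` alone, so a hybrid (multiprocessing for dispatch, subprocess with `timeout=` inside each worker) is a common pattern.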
I track my solutions with an SQLite database via the sqlite3 module, and this database could also be used by the framework.
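Since the position and solution formats are fixed-width strings, the shared table could be as simple as the sketch below. The database filename and column names are made up for the example; the length constraints mirror the 64-character positions and sub-512-character solutions described above:

```python
import sqlite3

conn = sqlite3.connect("qubic.db")  # hypothetical filename
conn.execute("""
    CREATE TABLE IF NOT EXISTS solutions (
        position TEXT PRIMARY KEY CHECK (length(position) = 64),
        result   TEXT CHECK (length(result) <= 512)
    )
""")

def record(position, result):
    # INSERT OR REPLACE: re-solving a position just overwrites its row
    with conn:
        conn.execute(
            "INSERT OR REPLACE INTO solutions VALUES (?, ?)",
            (position, result),
        )

def lookup(position):
    # Returns the stored solution string, or None if not yet solved
    row = conn.execute(
        "SELECT result FROM solutions WHERE position = ?", (position,)
    ).fetchone()
    return row[0] if row else None
```

With `position` as the primary key, workers on any machine can check `lookup()` before wasting hours on a position another machine already finished.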