Dear community,
I have just written a very simple model in Python that simulates human data in a 2-choice task. There are two experimental conditions. For each condition and simulated trial, the model returns the latency (in milliseconds) and the accuracy (correct versus incorrect) of the response. Here is the issue: I need to simulate 100000 trials (and ideally even more) in order to fit the model to data. Right now, the simulation takes ~1 minute. I need to decrease that runtime considerably, ideally to less than 1 second. Any help on this optimization problem would be highly appreciated. Perhaps the best way would be to recode the model in Cython, but I don't know that language.
import numpy as np
import time

def DMC(congruency, b_1, mu_r, A, mu_c, shape, tau, s_r, s_z, ntrials=100000):
    dt = 1.0      # integration constant, 1 ms
    sigma = 4.0   # diffusion coefficient
    tmax = 10000  # maximum number of time steps per trial
    t = np.linspace(dt, tmax, int(tmax / dt))
    A = A * congruency
    b_2 = -b_1          # lower (error) boundary, symmetric around 0
    starting_point = 0
    RT = []
    response = []
    # time-varying drift rate: gamma-shaped automatic activation
    # plus the constant drift of the controlled process
    mu = (A * np.exp(-t / tau)
          * np.power(np.e * t / ((shape - 1) * tau), shape - 1)
          * ((shape - 1) / t - 1 / tau)
          + mu_c)
    for i in range(int(ntrials)):
        # starting point uniformly distributed across trials
        X = np.random.uniform(starting_point - s_z / 2, starting_point + s_z / 2)
        elapsed = 0.0  # renamed from "time" to avoid shadowing the time module
        for j in range(tmax):
            elapsed += dt
            dX = mu[j] * dt + sigma * np.sqrt(dt) * np.random.randn()
            X += dX
            if X >= b_1:  # correct decision
                residual = np.random.uniform(mu_r - s_r / 2, mu_r + s_r / 2)
                RT.append(elapsed + residual)
                response.append(1)
                break
            if X <= b_2:  # incorrect decision
                residual = np.random.uniform(mu_r - s_r / 2, mu_r + s_r / 2)
                RT.append(elapsed + residual)
                response.append(0)
                break
    return np.array(RT), np.array(response)

start = time.perf_counter()
RT_comp, response_comp = DMC(1, 65.8, 324, 25.1, 0.486, 2.3, 41.2, 42.7, 43.9)
RT_incomp, response_incomp = DMC(-1, 65.8, 324, 25.1, 0.486, 2.3, 41.2, 42.7, 43.9)
print(time.perf_counter() - start)
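One direction that seems promising is to vectorize over trials in pure NumPy, so that each time step updates all still-running diffusion processes at once and retires the ones that have crossed a boundary. Below is only a rough sketch of that idea (the name DMC_vec and the use of numpy.random.default_rng are my own choices; as in my loop version, trials that never reach a boundary within tmax are simply dropped):

import numpy as np

def DMC_vec(congruency, b_1, mu_r, A, mu_c, shape, tau, s_r, s_z, ntrials=100000):
    dt = 1.0
    sigma = 4.0
    tmax = 10000
    t = np.linspace(dt, tmax, int(tmax / dt))
    A = A * congruency
    b_2 = -b_1
    mu = (A * np.exp(-t / tau)
          * np.power(np.e * t / ((shape - 1) * tau), shape - 1)
          * ((shape - 1) / t - 1 / tau)
          + mu_c)
    rng = np.random.default_rng()
    # all starting points drawn at once (starting_point = 0, as above)
    X = rng.uniform(-s_z / 2, s_z / 2, ntrials)
    RT = np.zeros(ntrials)
    response = np.zeros(ntrials, dtype=np.int64)
    finished = np.zeros(ntrials, dtype=bool)
    alive = np.arange(ntrials)  # indices of trials still diffusing
    sqdt = np.sqrt(dt)
    for j in range(tmax):
        # one Euler step for every trial that is still running
        X[alive] += mu[j] * dt + sigma * sqdt * rng.standard_normal(alive.size)
        Xa = X[alive]
        upper = Xa >= b_1
        done = upper | (Xa <= b_2)
        if done.any():
            idx = alive[done]
            residual = rng.uniform(mu_r - s_r / 2, mu_r + s_r / 2, idx.size)
            RT[idx] = (j + 1) * dt + residual   # elapsed time plus residual stage
            response[idx] = upper[done]         # True -> 1 (correct), False -> 0
            finished[idx] = True
            alive = alive[~done]
            if alive.size == 0:
                break
    return RT[finished], response[finished]

It is called exactly like DMC above. Since the set of still-running trials shrinks quickly once most walks have crossed a boundary, each step touches fewer and fewer trials, but I have no idea whether this is enough to get under 1 second.

I have also read that Numba can JIT-compile exactly this kind of scalar loop, which might spare me from learning Cython. If I understand the documentation correctly, something like the sketch below should work (dmc_core is my name; the first call pays a one-time compilation cost, and DMC would compute mu as before and then call dmc_core(mu, b_1, mu_r, s_r, s_z, 4.0, 1.0, 10000, 100000)):

import numpy as np
from numba import njit

@njit(cache=True)
def dmc_core(mu, b_1, mu_r, s_r, s_z, sigma, dt, tmax, ntrials):
    b_2 = -b_1
    RT = np.empty(ntrials)
    response = np.empty(ntrials, dtype=np.int64)
    n = 0  # number of terminated trials
    sqdt = np.sqrt(dt)
    for i in range(ntrials):
        X = np.random.uniform(-s_z / 2, s_z / 2)
        for j in range(tmax):
            X += mu[j] * dt + sigma * sqdt * np.random.randn()
            if X >= b_1 or X <= b_2:  # either boundary reached
                RT[n] = (j + 1) * dt + np.random.uniform(mu_r - s_r / 2, mu_r + s_r / 2)
                response[n] = 1 if X >= b_1 else 0
                n += 1
                break
    return RT[:n], response[:n]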