Well, I've been thinking that with a few thousand elements, some loops may be avoided by using the Kronecker product (np.kron).
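To make the index trick concrete, here is a tiny sketch of what np.kron produces for the two index vectors (the variable names mirror the example below):

```python
import numpy as np

i1 = np.arange(3)    # ball node indices: [0 1 2]
i2b = np.arange(2)   # floor node indices: [0 1]

# each ball index repeated once per floor node:
vect1 = np.kron(i1, np.ones(2, dtype=int))   # -> [0 0 1 1 2 2]
# the whole floor index block repeated once per ball node:
vect2 = np.kron(np.ones(3, dtype=int), i2b)  # -> [0 1 0 1 0 1]
# read column-wise, (vect1, vect2) enumerates every (ball, floor) node pair
```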
Just an example with 5 million of calculations (with no loop):
import numpy as np
import time

n1 = 1_000   # number of nodes in the ball
n2 = 5_000   # number of nodes in the floor

## Matrices build-up; the first column holds the element numbers
elt1 = np.random.random((n1, 4))   # ball
elt2 = np.random.random((n2, 4))   # floor
i1 = np.arange(0, n1)              # ball node indices
i2 = np.arange(n1, n1 + n2)        # just to have different numbers
i2b = np.arange(0, n2)             # floor node indices
elt1[:, 0] = i1                    # the actual node numbers do not matter
elt2[:, 0] = i2

# ball: vect1 is n1 blocks of n2 rows each,
# i.e. vect1 = [0 0 0 ... 0 1 1 1 ... 1 and so on]
# floor: vect2 is the block of n2 nodes (floor) repeated n1 times
# vect1 and vect2 are vectors of indices into elt1 and elt2 respectively
t0 = time.time()
j1 = np.ones(n2, dtype=int)        # warning: indices must be integers
j2 = np.ones(n1, dtype=int)
vect1 = np.kron(i1, j1)
vect2 = np.kron(j2, i2b)
NumberOfRows = vect1.shape[0]
distance_vector = np.sqrt((elt2[vect2, 1] - elt1[vect1, 1])**2
                        + (elt2[vect2, 2] - elt1[vect1, 2])**2
                        + (elt2[vect2, 3] - elt1[vect1, 3])**2)
minimum_distance = np.min(distance_vector)
t1 = time.time()

## check: the distance from node 0 of the ball to node 0 of the floor
# must equal distance_vector[0]
check = np.sqrt((elt2[0, 1] - elt1[0, 1])**2
              + (elt2[0, 2] - elt1[0, 2])**2
              + (elt2[0, 3] - elt1[0, 3])**2)
diff = check - distance_vector[0]
print("if diff is about 0. then ok; here diff = {}".format(diff))
print("the {} calculations took {} seconds".format(NumberOfRows, t1 - t0))

(Seems to be good: the 5 million calculations took 0.7 second on my old laptop with 6 GB of RAM.)
Nonetheless, the amount of memory becomes too high with several million elements, i.e. for more general modelling, so in my mind using loops cannot be avoided. I'm wondering if Numba can help (I've never used it so far, but it seems promising if the necessary NumPy capabilities have been implemented).
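As a rough illustration of that idea, here is a minimal sketch of the looped version with Numba; this assumes numba is installed (the try/except falls back to plain Python so the function still runs without it):

```python
import numpy as np

try:
    from numba import njit            # optional: JIT-compiles the loops
except ImportError:                   # plain-Python fallback if Numba is absent
    njit = lambda f: f

@njit
def min_distance(elt1, elt2):
    # elt1: (n1, 4) ball nodes, elt2: (n2, 4) floor nodes;
    # columns 1..3 hold the x, y, z coordinates, as in the example above
    best = np.inf
    for i in range(elt1.shape[0]):
        for j in range(elt2.shape[0]):
            d2 = ((elt1[i, 1] - elt2[j, 1])**2
                  + (elt1[i, 2] - elt2[j, 2])**2
                  + (elt1[i, 3] - elt2[j, 3])**2)
            if d2 < best:             # track the squared minimum, sqrt once
                best = d2
    return np.sqrt(best)
```

The explicit double loop needs only O(1) extra memory per pair, so nothing like the n1*n2 index vectors ever has to be materialised.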
Your solution may be a mix between loops (one per time step) and something close to the previous example.
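A hypothetical sketch of that mix: one Python loop over time steps, with a fully vectorised distance computation inside each step (here via broadcasting rather than np.kron, which avoids building the index vectors; the random coordinates are stand-ins for the real nodal positions at each step):

```python
import numpy as np

def min_distance_step(ball, floor):
    # ball: (n1, 3), floor: (n2, 3) nodal coordinates for one time step
    diff = ball[:, None, :] - floor[None, :, :]   # (n1, n2, 3) by broadcasting
    return np.sqrt((diff**2).sum(axis=2)).min()

rng = np.random.default_rng(0)
n_steps, n1, n2 = 3, 100, 200
minima = np.empty(n_steps)
for t in range(n_steps):              # one loop iteration per time step
    ball = rng.random((n1, 3))        # stand-ins for the real coordinates
    floor = rng.random((n2, 3))
    minima[t] = min_distance_step(ball, floor)
```

Only one time step's (n1, n2) distance matrix lives in memory at a time, so the peak footprint stays bounded however many steps the simulation has.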
Hope it helps you in some way.
Paul