Python Forum
Random access binary files with mmap - drastically slows with big files - Printable Version




Random access binary files with mmap - drastically slows with big files - danart - Jun-16-2019

I've posted a similar question to Stack Overflow but unfortunately didn't get any answers there. Maybe this is a more Python-specific issue and I'll get some information here.

I have a dataset made up of a bunch of large files (~100), and I want to extract specific lines from those files very efficiently (both in memory use and in speed).

My code gets a list of relevant files, opens each file, and maps it into memory with mmap. For each file it receives a list of indices, and going over those indices it retrieves the relevant information (10 bytes for this example). Finally it closes both handles:

import mmap

binaryFile = open(path, "r+b")
# length 0 maps the whole file into memory
binaryFile_mm = mmap.mmap(binaryFile.fileno(), 0)
for INDEX in INDEXES:
    # slice 10 bytes at the given offset and decode them
    information = binaryFile_mm[INDEX:INDEX + 10].decode("utf-8")
binaryFile_mm.close()
binaryFile.close()
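For reference, the same random-access pattern can also be done with positioned reads instead of a memory mapping. This is just a minimal sketch using the same placeholder names (path, INDEXES) as above; os.pread is only available on Unix:

import os

# read 10 bytes at each offset directly from the file descriptor;
# os.pread does not move the file position, so no seek is needed
fd = os.open(path, os.O_RDONLY)
for INDEX in INDEXES:
    information = os.pread(fd, 10, INDEX).decode("utf-8")
os.close(fd)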
The mmap code above runs in parallel, with thousands of indices per file, and does that continuously, several times a second, for hours.

Now to the problem - the code runs well when I limit the indices to be small (meaning, when I ask the code to get information from the beginning of the file). But when I increase the range of the indices, everything slows down to (almost) a halt AND the buff/cache memory fills up (I'm not sure whether the memory issue is related to the slowdown).

So my question is: why does it matter whether I retrieve information from the beginning or the end of the file, and how can I get instant access to information from the end of the file without the slowdown and without filling up the buff/cache memory?

PS - some numbers and sizes: I have ~100 files, each about 1GB in size. When I limit the indices to the first 0%-10% of a file it runs fine, but when I allow an index to be anywhere in the file it stops working.
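One mitigation worth trying (my own assumption, not something from the original post): by default the kernel reads ahead around every page you touch, so random access spread across whole multi-GB files can pull far more data into the page cache than you actually use. On Python 3.8+ you can advise the kernel that access will be random, which disables readahead; a minimal sketch, Unix-only, with path as a placeholder:

import mmap

binaryFile = open(path, "rb")
binaryFile_mm = mmap.mmap(binaryFile.fileno(), 0, access=mmap.ACCESS_READ)
# tell the kernel the mapping is accessed randomly so it skips readahead
# (mmap.madvise and MADV_RANDOM require Python 3.8+ on a Unix platform)
if hasattr(mmap, "MADV_RANDOM"):
    binaryFile_mm.madvise(mmap.MADV_RANDOM)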


RE: Random access binary files with mmap - drastically slows with big files - danart - Jun-17-2019

Posted a question with code here: https://stackoverflow.com/questions/56629602/python-mmap-slow-access-to-end-of-files-with-test-code

Here is the code (tested with Python 3.5; creating the three 3 GB test files requires ~9 GB of storage):
import os, errno
import random, time
import mmap



def create_binary_test_file():
	print("Creating 3 files with 3,000,000,000 characters each, this takes a while...")
	test_binary_file1 = open("test_binary_file1.testbin", "wb")
	test_binary_file2 = open("test_binary_file2.testbin", "wb")
	test_binary_file3 = open("test_binary_file3.testbin", "wb")
	# translation table mapping every byte value to a lowercase ascii letter;
	# build it once instead of rebuilding it on every iteration
	tbl = bytes.maketrans(bytearray(range(256)),
	                      bytearray([ord(b'a') + b % 26 for b in range(256)]))
	for i in range(1000):
		if i % 100 == 0:
			print("progress - ", i / 10, " % ")
		# efficiently create random 3 MB strings and write them to the files
		random_string = os.urandom(3000000).translate(tbl)
		# random_string is already bytes - writing str(random_string).encode('utf-8')
		# would wrap each chunk in a b'...' representation and inflate the files
		test_binary_file1.write(random_string)
		test_binary_file2.write(random_string)
		test_binary_file3.write(random_string)
	test_binary_file1.close()
	test_binary_file2.close()
	test_binary_file3.close()
	print("Created 3 binary files for testing. Each file contains 3,000,000,000 characters.")




# Opening binary test file
try:
	binary_file = open("test_binary_file1.testbin", "r+b")
except OSError as e:
	if e.errno == errno.ENOENT:  # no such file or directory - create the test files
		create_binary_test_file()
		binary_file = open("test_binary_file1.testbin", "r+b")
	else:
		raise  # any other I/O error should not be silently swallowed




## example of use - perform 100 iterations; in each iteration open one of the binary files and retrieve 50,000 sample strings
## (k is set to 50000 below; if the code runs fast and without a slowdown, increase it further to reproduce the problem)

## Example 1 - getting information from start of file
print("Getting information from start of file")
etime = []
for i in range(100):
	start = time.time()
	binary_file_mm = mmap.mmap(binary_file.fileno(), 0)
	# indices restricted to the first ~100 KB of the 3 GB file
	sample_index_list = random.sample(range(1, 100000 - 1000), k=50000)
	sampled_data = [[binary_file_mm[v:v + 1000].decode("utf-8")] for v in sample_index_list]
	binary_file_mm.close()
	binary_file.close()
	file_number = random.randint(1, 3)
	binary_file = open("test_binary_file" + str(file_number) + ".testbin", "r+b")
	etime.append((time.time() - start))
	if i % 10 == 9:
		print("Iter ", i, " \tAverage time - ", '%.5f' % (sum(etime[-10:]) / len(etime[-10:])))
binary_file.close()


## Example 2 - getting information from anywhere in the file
print("Getting information from anywhere in the file")
binary_file = open("test_binary_file1.testbin", "r+b")
etime = []
for i in range(100):
	start = time.time()
	binary_file_mm = mmap.mmap(binary_file.fileno(), 0)
	# indices may now fall anywhere in the 3,000,000,000-byte file
	sample_index_list = random.sample(range(1, 3000000000 - 1000), k=50000)
	sampled_data = [[binary_file_mm[v:v + 1000].decode("utf-8")] for v in sample_index_list]
	binary_file_mm.close()
	binary_file.close()
	file_number = random.randint(1, 3)
	binary_file = open("test_binary_file" + str(file_number) + ".testbin", "r+b")
	etime.append((time.time() - start))
	if i % 10 == 9:
		print("Iter ", i, " \tAverage time - ", '%.5f' % (sum(etime[-10:]) / len(etime[-10:])))
binary_file.close()
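Not part of the original test, but when benchmarking page-cache behaviour like this it helps to start each run cold. A minimal sketch of a hypothetical helper drop_file_cache that evicts a file's cached pages (os.posix_fadvise is Unix-only):

import os

def drop_file_cache(path):
	# advise the kernel to drop cached pages for this file;
	# length 0 means "from the offset to the end of the file"
	fd = os.open(path, os.O_RDONLY)
	try:
		os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_DONTNEED)
	finally:
		os.close(fd)

for n in (1, 2, 3):
	drop_file_cache("test_binary_file" + str(n) + ".testbin")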