Jun-06-2018, 06:48 AM
Here is my situation: I have a well-trained speech synthesis model. I want to speed up synthesis with multiprocessing: pre-load the model in each CPU process, then keep feeding sentences into it for text-to-speech.
Here is my script:
####################################################################
#!/usr/bin/python3
####################################################################
from multiprocessing import Process, Pool, cpu_count
import os, time
####################################################################
from tacotron.demo_synthesizer import Synthesizer
from splitting_sent import splitting_para
import tensorflow as tf
from datasets import audio
####################################################################
from pypinyin import pinyin, Style
####################################################################
BASE_DIR = os.path.split(os.path.realpath(__file__))[0]
VOICE = BASE_DIR + "/tmp"
TXT = BASE_DIR + "/txt"
os.makedirs(VOICE, exist_ok=True)
os.makedirs(TXT, exist_ok=True)
####################################################################

def syn(py):
    # Note: this loads the model on every call, once per submitted task.
    synthesizer = Synthesizer()
    synthesizer.load("/path/to/the/model")
    wav_name = time.time()
    wav_path = VOICE + "/" + str(wav_name)
    wav = synthesizer.synthesize(py)
    audio.save_wav(wav, wav_path)

if __name__ == '__main__':
    with open(os.path.join(TXT, "content.txt"), "r") as f:
        lines = f.read().splitlines()
    lines = "".join(lines)
    sentences = splitting_para(lines)  # split the paragraph into individual sentences

    # For Chinese TTS, converting the characters to Pinyin is an
    # unavoidable prerequisite step.
    py_list = []
    for sent in sentences:
        py_sent = pinyin(sent, style=Style.TONE3)
        py_sent = " ".join([i[0] for i in py_sent if i[0].isalnum()])
        py_list.append(py_sent)

    print('Run the main process (%s).' % os.getpid())
    mainStart = time.time()
    p = Pool(cpu_count())
    for py in py_list:
        p.apply_async(syn, args=(py,))
    print('Waiting for all subprocesses done ...')
    p.close()
    p.join()
    print('All subprocesses done')
    mainEnd = time.time()
    print('All process ran %0.2f seconds.' % (mainEnd - mainStart))

I am stuck on this issue: I could only pre-load 12 models into the 12 processes for the first batch of sentences.
However, I cannot keep feeding the next 12 sentences into the pre-loaded models: the processes terminate as soon as the first batch of 12 sentences has been synthesized. I am totally lost here. :(
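For comparison, here is a minimal sketch of the pattern I am aiming for, using Pool's `initializer` argument so each worker loads its model exactly once and then serves every task sent to it. The `Synthesizer` calls are replaced by a stand-in `_model` function so the sketch is self-contained; in the real script the initializer would do `Synthesizer()` plus `load(...)` instead:

```python
from multiprocessing import Pool, cpu_count

_model = None  # per-process global, set once by the initializer

def init_worker():
    global _model
    # Real script (assumption): _model = Synthesizer(); _model.load("/path/to/the/model")
    # Stand-in "model" so this sketch runs anywhere:
    _model = lambda text: "wav:" + text

def syn(py):
    # Reuses the model that init_worker already loaded in this process.
    return _model(py)

if __name__ == "__main__":
    with Pool(cpu_count(), initializer=init_worker) as p:
        # map() can be called repeatedly; the workers (and their
        # pre-loaded models) persist until the pool is closed.
        first_batch = p.map(syn, ["ni3 hao3", "shi4 jie4"])
        second_batch = p.map(syn, ["zai4 jian4"])
    print(first_batch, second_batch)
```

Is something like this the right direction, or is there a better way to keep the worker processes alive between batches?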
Any suggestions would be much appreciated. :)