linux - Python thread generates tasks increasingly and consumes too much memory and CPU


I have a specific task that needs to run every 1 second. I wrote the program below and made it a service in Linux. After the service starts, the Python program generates threads at an increasing rate and consumes more and more CPU and memory. Because the maximum number of tasks in Linux is limited, once the total number of tasks goes beyond that maximum, the service crashes.

As you can see in the picture, the number of tasks increases over time, with high memory and CPU usage!

My service code:

    import threading

    threads = []

    def process():
        t = threading.Timer(interval=1, function=process)
        t.start()
        threads.append(t)
        do_task()

    if __name__ == '__main__':
        process()
        for thd in threads:
            thd.join()

My question: how can I limit the number of threads? How can I make sure no new task is generated before the previous task has finished running?

What you have written there looks like a fork bomb, or at least very close to it.

Your process function keeps spawning threads that run the same function, in addition to running the actual job it's supposed to do. That means you end up with a huge number of threads in a short amount of time. A quick fix in the right direction is to do the work first and spawn the thread afterwards, like so:

    def process():
        do_task()
        t = threading.Timer(interval=1, function=process)
        t.start()
        threads.append(t)

The important thing to note here is that do_task() is executed before any additional thread is created.

That being said, why do you need a thread for the work at hand at all? Why won't you settle for time.sleep instead?

    import time

    while True:
        time.sleep(1)
        do_work()

While this won't guarantee that the job is done exactly once every second, it has a constant memory footprint, and if the job takes too long you won't run out of resources either.
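If a steadier once-per-second cadence does matter, the sleep can compensate for how long the job itself took by scheduling against a monotonic clock. A minimal sketch — run_every and its iterations parameter are illustrative additions, not part of the original answer:

```python
import time

def do_work():
    pass  # stand-in for the real job (assumption)

def run_every(interval, iterations=None):
    """Call do_work() on a fixed cadence, subtracting the job's own
    runtime from the sleep so the schedule does not drift."""
    next_run = time.monotonic()
    done = 0
    while iterations is None or done < iterations:
        do_work()
        done += 1
        next_run += interval
        delay = next_run - time.monotonic()
        if delay > 0:  # if the job overran the slot, skip sleeping
            time.sleep(delay)
```

Pass iterations=None (the default) to loop forever, as the plain while-loop above does; a finite count is handy for testing.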

