Memory error in optimization run.
Hi, I am getting a memory error while running an optimization, and the run does not continue after that.
-E--Strategy::init--loop params--
Exception in thread Thread-3:
Traceback (most recent call last):
  File "C:\Python27\lib\threading.py", line 810, in __bootstrap_inner
    self.run()
  File "C:\Python27\lib\threading.py", line 763, in run
    self.__target(*self.__args, **self.__kwargs)
  File "C:\Python27\lib\multiprocessing\pool.py", line 389, in _handle_results
    task = get()
MemoryError
I am using 4 parameter variations:
cerebro.optstrategy(TestStrategy, smaperiod_fast=xrange(40, 150, 15), smaperiod_slow=range(150, 300, 15), p1=range(25, 45, 8), p2=range(50, 300, 15))
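For scale, here is a quick check (plain Python, independent of backtrader) of how many strategy instances that grid implies; the ranges are copied from the call above:

```python
from itertools import product

# Parameter ranges copied from the optstrategy call above
smaperiod_fast = range(40, 150, 15)   # 8 values
smaperiod_slow = range(150, 300, 15)  # 10 values
p1 = range(25, 45, 8)                 # 3 values
p2 = range(50, 300, 15)               # 17 values

total = len(list(product(smaperiod_fast, smaperiod_slow, p1, p2)))
print(total)  # 8 * 10 * 3 * 17 = 4080 backtests in one optimization run
```

Every one of those 4080 runs keeps its results (and, by default, its buffers) until the optimization finishes.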
However, the maximum memory consumed is 1.7 GB; with 32-bit it should go up to 4 GB without any issue. Attached is a snapshot of all the processes spawned for the optimization run.
Any idea what could be wrong here? I am using backtrader Release 18.104.22.168
The behavior of 32-bit processes under Windows and the 4 GBytes vs 2 GBytes topic is even covered in Wikipedia.
Without knowing the actual internals of Windows, 1.7 GBytes is possibly what gets reported to the user, with an additional 100-200 MBytes of things not directly assigned to the process (external DLLs used by the Python process, for example), which already brings the 1.7 GBytes very close to the 2 GBytes limit.
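A quick way to confirm whether the interpreter itself is a 32-bit build (and therefore subject to the smaller user address space on standard Windows) is to check the pointer size:

```python
import struct
import sys

# 4-byte pointers => 32-bit build (sys.maxsize == 2**31 - 1);
# 8-byte pointers => 64-bit build (sys.maxsize == 2**63 - 1)
bits = struct.calcsize("P") * 8
print(bits, sys.maxsize)
```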
Well, my data size is not big: it's 1-minute OHLC bars for 1 year, resampled to a 5-minute compression.
When I ran it with about 50 iterations it worked fine. A higher number of iterations (150+) gives this error, which means memory is increasing per iteration.
At any point in time I see 10 processes spawned by backtrader for this optimization run. Is there a way it can release the memory from some of the previous runs as it progresses into the next iterations?
Without going into calculations (which are not relevant):
Small data multiplied by the number of runs, plus indicators (and associated sub-indicators) multiplied by the number of runs, will end up being something big.
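As an illustration only (the numbers below are assumptions, not figures from the thread), even the raw data lines add up quickly when every run keeps its full buffers:

```python
# Back-of-envelope sketch with assumed numbers: a year of round-the-clock
# 1-minute bars, six float fields per bar (OHLC, volume, open interest)
bars = 365 * 24 * 60        # 525,600 bars
fields = 6
bytes_per_float = 8

per_run = bars * fields * bytes_per_float
print(per_run // 2**20)     # ~24 MiB per run before any indicator buffers
```

A few hundred retained runs at tens of MBytes each, plus the indicator and sub-indicator lines on top, is enough to approach a 2 GByte ceiling.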
10 processes can be broken down to:
- 4 core computer
- 2 threads per core
For a total of 4 x 2 = 8 workers, plus 2 additional Python processes (there may be a master created by the multiprocessing module) for a grand total of 10. Seems right.
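The worker count comes straight from the standard library: unless told otherwise, multiprocessing sizes its pool to the logical CPU count (physical cores times hardware threads per core). You can verify the figure on your own machine:

```python
import multiprocessing as mp

# Logical CPUs = physical cores x hardware threads per core.
# A default Pool() creates this many workers; on a 4-core machine
# with 2 threads per core that is 8, matching the breakdown above.
print(mp.cpu_count())
```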
If you release the memory from the previous iterations you lose the results. If you don't have complex resample/replay scenarios, the suggestion is to use exactbars=1 when creating/running the cerebro, which tries to reduce the buffers to the minimum.
Or you can break your optimization into different runs to make them fit within the limits of your machine.
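A minimal sketch of that idea, assuming you drive the batches yourself (the `batches` helper below is hypothetical, not part of backtrader): split the parameter grid into chunks and feed each chunk to its own optstrategy run, so memory for one batch is released before the next starts.

```python
from itertools import product

# Full grid for two of the parameters (ranges from the original call)
grid = list(product(range(40, 150, 15), range(150, 300, 15)))

def batches(seq, size):
    """Yield successive slices of at most `size` parameter combinations."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

# Each batch would be passed to a fresh cerebro.optstrategy(...) run
parts = list(batches(grid, 20))
print(len(grid), len(parts))  # 80 combinations split into 4 batches
```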