
Out of memory error when running optstrategy



  • New to Backtrader. I got the following error when running optstrategy on a Linux server: all of the memory and the swap drained, and then the run failed with the error below.

    ```
    Traceback (most recent call last):
      File "/home/dillonhao/anaconda3/lib/python3.7/threading.py", line 926, in _bootstrap_inner
        self.run()
      File "/home/dillonhao/anaconda3/lib/python3.7/threading.py", line 870, in run
        self._target(*self._args, **self._kwargs)
      File "/home/dillonhao/anaconda3/lib/python3.7/multiprocessing/pool.py", line 412, in _handle_workers
        pool._maintain_pool()
      File "/home/dillonhao/anaconda3/lib/python3.7/multiprocessing/pool.py", line 248, in _maintain_pool
        self._repopulate_pool()
      File "/home/dillonhao/anaconda3/lib/python3.7/multiprocessing/pool.py", line 241, in _repopulate_pool
        w.start()
      File "/home/dillonhao/anaconda3/lib/python3.7/multiprocessing/process.py", line 112, in start
        self._popen = self._Popen(self)
      File "/home/dillonhao/anaconda3/lib/python3.7/multiprocessing/context.py", line 277, in _Popen
        return Popen(process_obj)
      File "/home/dillonhao/anaconda3/lib/python3.7/multiprocessing/popen_fork.py", line 20, in __init__
        self._launch(process_obj)
      File "/home/dillonhao/anaconda3/lib/python3.7/multiprocessing/popen_fork.py", line 70, in _launch
        self.pid = os.fork()
    OSError: [Errno 12] Cannot allocate memory
    ```
    
    I dug in a little with tracemalloc, and it looks like the multiprocessing call below consumes 50% of the memory.
    ```
    #1: /home/dillonhao/anaconda3/lib/python3.7/multiprocessing/connection.py:251:
        return _ForkingPickler.loads(buf.getbuffer())
    ```
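    (For reference, a minimal tracemalloc setup along these lines will surface per-line statistics like the one above; the snapshot has to be taken while the optimizer is running and memory is climbing.)

    ```
    import tracemalloc

    tracemalloc.start()

    # ... kick off the optimization, then sample while memory is climbing ...

    snapshot = tracemalloc.take_snapshot()
    for stat in snapshot.statistics('lineno')[:10]:
        print(stat)  # top 10 allocation sites by line, e.g. connection.py:251
    ```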

    I double-checked my bt code and have no clue how to fix this without modifying the bt code itself. Can anyone help? Thanks in advance.

    ```
    import sys
    sys.path.append(r'/home/dillonhao/quant/lib/')
    from DataPiple.DataCheck import checkMissingDate
    import pickle
    import logging
    import numpy as np
    import backtrader as bt
    from backtrader.analyzers import (SQN, AnnualReturn, TimeReturn, SharpeRatio_A,
                                      SharpeRatio, TradeAnalyzer, DrawDown)

    startcash = 1000000

    logging.basicConfig(level=loglevel)

    cerebro = bt.Cerebro()
    cerebro.broker.setcash(startcash)

    cerebro.addstrategy(WLXStrategy)

    cerebro.optstrategy(
        WLXStrategy,
        lmd=np.arange(0.8, 1, 0.01),
        avg=np.arange(1, 4),
        dis=np.arange(0, 0.02, 0.002),
        filename=timeline,
    )

    file_location = r'/home/dillonhao/data/history_candle_data'
    file_name = r'BTC-USDT_5m.csv'
    btData = getLocalData(file_location, file_name)
    timedic = getTimeSlot('2019-06-01', 60, 30, 4)[0]
    data = bt.feeds.PandasData(dataname=btData, fromdate=timedic['stime'],
                               todate=timedic['etime'],
                               timeframe=bt.TimeFrame.Minutes)
    cerebro.adddata(data)
    cerebro.addsizer(bt.sizers.PercentSizer, percents=100)
    cerebro.broker.setcommission(commission=0.0005)
    cerebro.addanalyzer(SharpeRatio, timeframe=bt.TimeFrame.Minutes, compression=30)
    cerebro.addanalyzer(SQN)
    cerebro.addanalyzer(AnnualReturn)
    cerebro.addanalyzer(TradeAnalyzer)
    result = cerebro.run(maxcpus=24)
    ```



  • I may be mistaken, but my understanding is that the challenge with the built-in optimizer in backtrader is that it accumulates the results to return in a list. The more items accumulated, the more memory builds up, and it is only released at the end.

    Have a look here at @backtrader's comments to try to solve this problem.

    You can also consider breaking up your optimization into smaller runs.

    Personally, I found joy in not using the built-in optimizer, but instead using multiprocessing outside of backtrader and calling each backtest individually as a function. By saving each result to a spreadsheet or a Postgres table, I avoid the memory build-up altogether.

    ```
    pool = multiprocessing.Pool(processes=multiprocessing.cpu_count() - 1)
    pool.map(st.backtest_controller_multi, scenarios)
    pool.close()
    ```
    where scenarios is a list of dictionaries with all of the possible combinations of parameters.
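
    A minimal sketch of that pattern, reusing the parameter grid from the question (run_backtest, the result file name, and the hard-coded cash/commission values are illustrative, not my actual code):

    ```
    import csv
    import itertools
    import multiprocessing

    import numpy as np
    import backtrader as bt

    # WLXStrategy and getLocalData are the user's own strategy and data loader
    # from the question; they are assumed to be importable here.


    def run_backtest(params):
        """Run one complete, independent backtest and return a small result row."""
        cerebro = bt.Cerebro(stdstats=False)
        cerebro.broker.setcash(1000000)
        cerebro.broker.setcommission(commission=0.0005)
        btData = getLocalData('/home/dillonhao/data/history_candle_data',
                              'BTC-USDT_5m.csv')
        cerebro.adddata(bt.feeds.PandasData(dataname=btData,
                                            timeframe=bt.TimeFrame.Minutes))
        cerebro.addstrategy(WLXStrategy, **params)
        cerebro.run()
        # Only this small tuple leaves the worker, so nothing big accumulates.
        return (params['lmd'], params['avg'], params['dis'],
                cerebro.broker.getvalue())


    if __name__ == '__main__':
        # Every combination of parameters, as a list of dicts.
        scenarios = [dict(lmd=l, avg=a, dis=d)
                     for l, a, d in itertools.product(np.arange(0.8, 1, 0.01),
                                                      np.arange(1, 4),
                                                      np.arange(0, 0.02, 0.002))]
        with multiprocessing.Pool(processes=multiprocessing.cpu_count() - 1) as pool:
            rows = pool.map(run_backtest, scenarios)
        with open('opt_results.csv', 'w', newline='') as f:
            csv.writer(f).writerows(rows)
    ```

    Each worker builds and tears down its own Cerebro, so the memory for a scenario is reclaimed as soon as that scenario finishes; the parent process only ever holds the small result rows.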


  • @Dillonhao said in Out of memory error when running optstrategy:

    cerebro.addstrategy(WLXStrategy)
    cerebro.optstrategy(

    Probably a small note: there is no need to call both addstrategy and optstrategy for strategy optimization - optstrategy alone is enough - otherwise you will have two strategies running at the same time.
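
    In other words (a minimal sketch):

    ```
    # optstrategy alone registers one strategy instance per parameter combination:
    cerebro.optstrategy(WLXStrategy, lmd=np.arange(0.8, 1, 0.01))
    # cerebro.addstrategy(WLXStrategy)  # would add a second, fixed-parameter
    #                                   # strategy running alongside each of them
    ```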



  • @run-out Thanks for your help. I will give it a try.



  • @vladisld Sorry, the "cerebro.addstrategy(WLXStrategy)" line was actually commented out in my code. There was a mismatched ``` fence, and the markup rendered it as if it were active. My bad, but thank you for the reply.



  • I usually use the following Cerebro engine setup for optimization:

    ```
    cerebro = bt.Cerebro(maxcpus=args.maxcpus,
                         live=False,
                         runonce=True,
                         exactbars=False,
                         optdatas=True,
                         optreturn=True,
                         stdstats=False,
                         quicknotify=True)
    ```

    This setup skips the standard observers that would otherwise be added to the Cerebro engine automatically (stdstats=False). It also reduces the memory used for reporting the results: instead of returning the full strategy instances, only the parameters and the analyzers are returned (optreturn=True).
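
    With optreturn=True, what comes back from the optimization run is a list of lists of lightweight OptReturn objects rather than strategy instances. A minimal sketch of consuming them afterwards, assuming an SQN analyzer was added as in the question:

    ```
    results = cerebro.run()      # one inner list per parameter combination
    for run in results:
        for optret in run:
            params = optret.p._getkwargs()              # dict of strategy parameters
            sqn = optret.analyzers.sqn.get_analysis()   # analyzer output only
            print(params, sqn)
    ```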

    Also, you may find it interesting to take a look at my recent post about high memory consumption during strategy optimization with an InfluxDB data feed. Although it is not directly applicable to your case, there could be some clues there as well:

    https://community.backtrader.com/topic/2397/high-memory-consumption-while-optimizing-using-influxdb-data-feed



  • Update: it looks like the Sharpe ratio analyzer was the cause. I commented out the following line and that fixed the problem.

    ```
    self.addanalyzer(SharpeRatio, timeframe=bt.TimeFrame.Minutes, compression=30)
    ```
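
    If a risk-adjusted metric is still needed, one lighter-weight possibility (an untested sketch, not something from this thread) is to collect coarser returns with the TimeReturn analyzer and compute the Sharpe ratio outside the optimizer:

    ```
    import math

    cerebro.addanalyzer(TimeReturn, timeframe=bt.TimeFrame.Days, _name='dayret')
    # ... after result = cerebro.run(maxcpus=24):
    for run in result:
        for optret in run:
            rets = list(optret.analyzers.dayret.get_analysis().values())
            if len(rets) < 2:
                continue
            mean = sum(rets) / len(rets)
            std = math.sqrt(sum((r - mean) ** 2 for r in rets) / (len(rets) - 1))
            if std:
                # sqrt(365): crude annualization for a market that trades daily
                print(optret.p._getkwargs(), mean / std * math.sqrt(365))
    ```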
