Smart optimizations and Backtrader



  • Hi there!
    I've been trying to use Backtrader in order to backtest and optimize trading strategies.
    I've noticed that the optimizer is just going through all the parameter ranges without using any heuristics (for example hill-climbing).

I tried creating a cerebro instance, running it, and reading the broker value at the end, while driving the parameters from an external optimizer. This turned out to be very slow compared to optstrategy, even accounting for multiple CPUs.

    Does anyone here have any recommendations as to how to best use "smart" optimizers along with Backtrader?

    Thanks,
    Harel


  • administrators

    See this other thread: Community - Genetic Optimization



  • Hey, thanks for the reply.
    That's pretty much what I did.
    It runs much slower (per parameter combination) than just running cerebro.optstrategy.



  • @Harel-Rozental what did you use for smart optimization?



  • @ab_trader For now I used exactly what was suggested in that thread, the optunity library.

    But the problem isn't the time it takes to choose new parameters (that wouldn't be Backtrader's problem); the problem is the time it takes to conduct one test (i.e. going through 1 month or 1 year of minute data).



  • I think I see what the source of the problem is.
    Creating the cerebro instance and loading the data takes a lot of time.
    When using optunity, the cerebro creation is repeated on every evaluation.

    Is there a way I can use the same instance (and the same data) for every run, like it does when using cerebro.optstrategy?



  • @Harel-Rozental said in Smart optimizations and Backtrader:

    It runs much slower (per parameter combination) than just running cerebro.optstrategy.

    I am confused by this statement. For the case shown in that thread, optstrategy with two parameters ranged over [2, 55] will run cerebro 53 x 53 = 2,809 times. With optunity, the script will run cerebro only num_evals=100 times. How can it be slower?



  • Slower per iteration.
    An iteration that includes initializing cerebro is more than 2x slower than an optstrategy iteration when using large amounts of data.

    Even a month of minute data takes about 10 seconds per iteration.


  • administrators

    optstrategy reuses the loaded data and shares it across worker processes.

    The usual approach to reduce loading time is to restrict the file to the actual data which will be used in the backtesting.

    Some ideas here: Community - How to speed up backtest
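
    As an illustration, the restriction can be applied to the DataFrame before the feed is even created, so backtrader never touches the unused history (the synthetic data and the date window below are placeholders; in practice the DataFrame would come from the minute-data file):

    ```python
    import pandas as pd
    import backtrader as bt

    # Full history (synthetic stand-in for the real minute-data file).
    idx = pd.date_range('2016-01-01', periods=200_000, freq='min')
    df = pd.DataFrame({'open': 1.0, 'high': 1.0, 'low': 1.0,
                       'close': 1.0, 'volume': 0, 'openinterest': 0},
                      index=idx)

    # Slice down to the window actually needed for the backtest
    # BEFORE creating the feed.
    df = df.loc['2016-03-01':'2016-03-31']

    data = bt.feeds.PandasData(dataname=df)
    cerebro = bt.Cerebro()
    cerebro.adddata(data)
    ```
    
    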



  • Ok, got it now. The .adddata() method takes a long time. I use roughly 5,000+ bars, so it doesn't affect me significantly.

    As a crazy idea (didn't check it, have no large data sets):

    def only_add_data_to_cerebro(data):
        cerebro = bt.Cerebro()    
        cerebro.adddata(data)
        return cerebro
    
    data = bt.feeds.YahooFinanceData(...)
    cerebro_with_data = only_add_data_to_cerebro(data)
    
    def optstrategy(params):
        cerebro_with_data.addstrategy(SmaCross, params)
        cerebro_with_data.run()
        return cerebro_with_data.broker.getvalue()
    

    optstrategy function will be used for optimization runs.

    I've used such approach with .plot() method (fully initialized and processed cerebro in the function) and it worked.



  • @backtrader I did restrict the file (actually I use a pandas dataframe) before loading it into a backtrader data object, but my restriction only cuts the minute data down from 7 years to 1 or 2 years. Even when resampling to, say, 30 minutes, I still want to replay from the 1-minute data.
    I guess the data reuse is what makes optstrategy faster; I'll try to mimic that.

    @ab_trader I don't know if running the same cerebro instance will work, but I will try something like it in order to, as I said, mimic optstrategy.
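
    For reference, replaying 1-minute data as developing 30-minute bars can be sketched like this (the synthetic one-day feed is a placeholder for the real sliced DataFrame):

    ```python
    import pandas as pd
    import backtrader as bt

    # 1-minute bars for one trading session (synthetic placeholder).
    idx = pd.date_range('2017-01-02 09:30', periods=390, freq='min')
    df = pd.DataFrame({'open': 1.0, 'high': 1.0, 'low': 1.0,
                       'close': 1.0, 'volume': 0, 'openinterest': 0},
                      index=idx)

    data = bt.feeds.PandasData(dataname=df,
                               timeframe=bt.TimeFrame.Minutes, compression=1)

    cerebro = bt.Cerebro()
    # Deliver developing 30-minute bars rebuilt from the 1-minute feed.
    cerebro.replaydata(data, timeframe=bt.TimeFrame.Minutes, compression=30)
    ```
    
    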



  • A bit of an update:
    I managed to get my optimizations to run as quickly as optstrategy by subclassing Cerebro and pulling some logic out of run() and addstrategy() into separate functions (one to initialize the data loading, and another which gets re-executed from outside the class for each optimization run).
    I also manage the multiprocessing from the outside.
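
    The subclassing code itself wasn't posted; a minimal sketch of the general idea (load the data ONCE outside the optimization loop and only rebuild the cheap parts per evaluation) might look like the following. SmaCross, the 'period' parameter, and the synthetic random-walk data are all illustrative, and this skips the Cerebro subclassing itself:

    ```python
    import backtrader as bt
    import numpy as np
    import pandas as pd

    class SmaCross(bt.Strategy):
        # Illustrative strategy; 'period' is the parameter being optimized.
        params = (('period', 20),)

        def __init__(self):
            sma = bt.ind.SMA(period=int(self.p.period))
            self.signal = bt.ind.CrossOver(self.data.close, sma)

        def next(self):
            if not self.position and self.signal > 0:
                self.buy()
            elif self.position and self.signal < 0:
                self.close()

    # Build (or read and slice) the DataFrame ONCE, outside the loop.
    np.random.seed(42)
    idx = pd.date_range('2017-01-01', periods=2000, freq='min')
    close = 100.0 + np.cumsum(np.random.randn(len(idx)))
    df = pd.DataFrame({'open': close, 'high': close, 'low': close,
                       'close': close, 'volume': 0, 'openinterest': 0},
                      index=idx)

    def evaluate(period):
        # A fresh Cerebro per run, but the costly file parsing is never
        # repeated; only a thin feed wrapper is rebuilt around the
        # cached DataFrame.
        cerebro = bt.Cerebro()
        cerebro.adddata(bt.feeds.PandasData(dataname=df))
        cerebro.addstrategy(SmaCross, period=period)
        cerebro.run()
        return cerebro.broker.getvalue()
    ```

    An external optimizer would then call evaluate(period) for each parameter set it wants to test.
    
    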



  • @Harel-Rozental

    Do you run multiprocessing by means of optunity?



  • @ab_trader On a quick check I couldn't find a way to make optunity do multiprocessing, so I just run it [cores] times and take the best result.
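
    The "run it [cores] times and take the best result" approach can be sketched with multiprocessing driving independent optunity searches. The objective here is a stand-in (in the thread it would run one backtest), and details.optimum follows optunity's documented return convention:

    ```python
    import multiprocessing as mp

    import optunity

    def objective(x, y):
        # Stand-in objective; in the thread this would run one backtest
        # and return a value to minimize (e.g. negative broker value).
        return (x - 1) ** 2 + (y + 2) ** 2

    def run_search(_):
        # One independent optunity search per worker process.
        pars, details, _ = optunity.minimize(objective, num_evals=100,
                                             x=[-5, 5], y=[-5, 5])
        return details.optimum, pars

    if __name__ == '__main__':
        n = mp.cpu_count()
        with mp.Pool(n) as pool:
            results = pool.map(run_search, range(n))
        # Keep the best of the independent runs.
        best_value, best_pars = min(results, key=lambda r: r[0])
    ```
    
    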



  • @Harel-Rozental in the documentation (one of the examples) they suggest the following for parallel processing:

        for solver in solvers:
            pars, details, _ = optunity.minimize(f, num_evals=100, x=[-5, 5], y=[-5, 5],
                                                 solver_name=solver)
            # the above line can be parallelized by adding `pmap=optunity.pmap`
            # however this is incompatible with IPython
    

    I wasn't able to run it.



  • @Harel-Rozental said in Smart optimizations and Backtrader:

    I managed to get my optimizations to run as quick as optstrategy by subclassing Cerebro and pulling out some stuff out of run() and addstrategy() into different functions (one to initialize data loading, and another which gets re-executed from outside the class for optimization).
    I also manage multiprocessing from the outside.

    Do you mind sharing some code? Or maybe a rough layout of what was implemented and where, in more detail?

