
how to speed up when I backtest?



  • I use a computer with 12 CPUs and 64 GB of memory, so I think my machine is good. But when I use backtrader to backtest a strategy that uses 950 stocks as feeds across different timeframes, it slows down badly: with a small sample, for example two stocks, it is very quick; with 10 stocks it is already slow; and with 950 stocks I don't know how long it will take. I know that the more stocks I use, the more time it needs, but is there something, or some function, I can use to speed it up?


  • administrators

    @tianjixuetu said in how to speed up when I backtest?:

    is there something, or some function, I can use to speed it up?

    backtrader is pure Python and Python is by nature limited to single-thread execution (add on top of that the very dynamic nature and introspection capabilities of Python)
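    The one place where backtrader does put multiple cores to work is parameter optimization, which runs one backtest per parameter combination in separate processes. A minimal sketch (the strategy and its period parameter are placeholders, not anything from this thread):

    ```python
    import backtrader as bt

    class MyStrategy(bt.Strategy):
        # placeholder strategy with a single tunable parameter
        params = (('period', 20),)

        def next(self):
            pass  # trading logic would go here

    cerebro = bt.Cerebro()
    # ... add the data feeds here ...

    # optstrategy() schedules one backtest per parameter value; the runs
    # are distributed across processes, so all 12 CPUs can be used
    cerebro.optstrategy(MyStrategy, period=range(10, 31, 5))
    results = cerebro.run(maxcpus=12)
    ```

    Note that this parallelizes across runs; a single backtest over 950 feeds still executes on one core.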



  • Thank you very much!

    Yesterday, when I backtested the 950 stocks, I found that after all the feeds are loaded my memory usage keeps growing as time passes, yet the run never enters 'next'. Can you help me? Thank you.

    I have two doubts:
    The first: when I test 20 stocks there is no problem, but when I go up to 950 stocks, the run simply never reaches 'next'!
    The second: how can I speed things up and save memory? For now I only use 950 stocks; what if I use 3000+ stocks?

    My strategy uses 1-minute data and daily data to judge whether to buy a stock, and it closes the position when the stock gains 2% or loses 2%, or when the stock no longer satisfies the selection constraint (see the sketch below).

    My main code is on GitHub:
    https://github.com/tianjixuetu/code_test/blob/1b855ad081e4723d8d6e36f68fe133a3ce7bc9c3/backtrader_more_timeframe_and_more_stock_backtest.py
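    Roughly, the logic is this (a simplified sketch; the selection rule and the CSV details are placeholders, the real code is in the repo above):

    ```python
    import backtrader as bt

    class TwoPercentExit(bt.Strategy):
        # simplified sketch: judge on the daily feed, trade on the
        # 1-minute feed, and close the position once it gains or loses
        # 2% or the selection rule no longer holds
        params = (('exit_pct', 0.02),)

        def __init__(self):
            self.entry_price = None

        def next(self):
            minute = self.datas[0]   # 1-minute feed
            daily = self.datas[1]    # daily feed (resampled from minutes)
            if len(daily) < 2:       # need a previous daily bar
                return

            if not self.position:
                if self.selection_ok(daily):
                    self.buy()
                    self.entry_price = minute.close[0]
            else:
                change = minute.close[0] / self.entry_price - 1.0
                if abs(change) >= self.p.exit_pct or not self.selection_ok(daily):
                    self.close()

        def selection_ok(self, daily):
            # placeholder for the real stock-selection constraint
            return daily.close[0] > daily.close[-1]

    cerebro = bt.Cerebro()
    data = bt.feeds.GenericCSVData(dataname='stock.csv',  # placeholder file
                                   timeframe=bt.TimeFrame.Minutes)
    cerebro.adddata(data)
    cerebro.resampledata(data, timeframe=bt.TimeFrame.Days)
    cerebro.addstrategy(TwoPercentExit)
    cerebro.run()
    ```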


  • administrators

    @tianjixuetu said in how to speed up when I backtest?:

    after all the feeds are loaded my memory usage keeps growing as time passes, yet the run never enters 'next'

    You are loading so much data that everything is being preloaded and, apparently in your case, also resampled. Disable preloading (see Docs - Cerebro and the sketch at the end of this post). This is NOT going to reduce the total time consumed; in most cases it will actually increase it. But if you are fighting memory contention and thrashing, it may actually help you get started.

    @tianjixuetu said in how to speed up when I backtest?:

    The second: how can I speed things up and save memory? For now I only use 950 stocks; what if I use 3000+ stocks?

    Please refer to the previous answer.

    It seems you are in desperate need of speed and light memory usage. If that is so, Python is the wrong tool.
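    For the memory side, the relevant switches are documented under Docs - Cerebro; a minimal sketch (a starting point, not a tuned configuration):

    ```python
    import backtrader as bt

    # preload=False: feeds are consumed bar by bar instead of being fully
    # loaded (and resampled) into memory before the run starts
    # exactbars=True: keep only the minimum buffer per data line, which
    # saves the most memory but also deactivates preloading and runonce
    cerebro = bt.Cerebro(preload=False, exactbars=True)

    # the same keywords may also be passed to run(), where they override
    # the values given to Cerebro()
    # results = cerebro.run(preload=False, exactbars=True)
    ```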



  • When I use the parameter preload=False, it is better than before. (screenshot, 2018-07-26 16:34)
    All the stock data is in CSV files totaling less than 15 GB; by now almost 30 GB of memory have been used.
    (screenshot, 2018-07-26 16:44)
    But the run still has not reached 'next' (once it does, it will print the datetime; see the sketch below), and this backtest has now been running for TWO HOURS.
    So I think something may be wrong.
    I need your help, thank you very much!
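    For reference, the datetime print in 'next' is just this kind of check (a minimal sketch):

    ```python
    def next(self):
        # first sign of life: print the timestamp of the current bar
        print(self.datas[0].datetime.datetime(0))
    ```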


  • administrators

    You are apparently also doing things from inside a python-kernel hijacking environment (judging from the screenshot). That is for sure not going to help.

    If you are loading 15 GB of data, optimizing, resampling and god knows what else, it shouldn't surprise you that things are slow, extremely slow.



  • I use a notebook as my research environment, so the python-kernel hijacking environment is Jupyter Notebook. After careful consideration I think we could use MongoDB or MySQL as a database to store the data and fetch from it only what is needed. This may reduce speed, but I think it can save a lot of memory (a sketch of the idea is below).
    By the way, after SIX hours of running it still hasn't entered 'next', so I will give up and develop a framework by myself!
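    A minimal sketch of that idea with MongoDB (pymongo and the 'market.bars' collection layout are assumptions; one OHLCV document per bar):

    ```python
    import backtrader as bt
    import pandas as pd
    from pymongo import MongoClient

    # assumed document fields: datetime, open, high, low, close, volume
    client = MongoClient('localhost', 27017)
    cursor = client['market']['bars'].find({'symbol': '000001'})
    df = (pd.DataFrame(list(cursor))
            .drop(columns=['_id', 'symbol'])
            .set_index('datetime')
            .sort_index())

    cerebro = bt.Cerebro(preload=False)
    cerebro.adddata(bt.feeds.PandasData(dataname=df,
                                        timeframe=bt.TimeFrame.Minutes,
                                        openinterest=None))  # no OI column
    ```

    PandasData takes the timestamps from the DataFrame index by default, so only the bars actually queried for one stock are materialized at a time.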


  • administrators

    @tianjixuetu said in how to speed up when I backtest?:

    I will give up and develop a framework by myself!

    Enjoy it!