Backtrader Community


    Memory Leak with multiple cerebros

    General Code/Help
    • Kjiessar
      Kjiessar last edited by

      Hi,
      I have a memory leak somewhere and I'm not familiar with tracking these down in Python, so I'm trying to narrow it down.
      To state it from the beginning: I don't think this is caused by bt but by my own stupidity.

      What I do, in a nutshell (code probably not runnable):

      import backtrader as bt

      def run(strategy, dataframe):
        cerebro = bt.Cerebro()
        cerebro.addstrategy(strategy)
        # a pandas DataFrame has to be wrapped in a data feed first
        cerebro.adddata(bt.feeds.PandasData(dataname=dataframe))
        cerebro.run()
        cerebro.plot()
      
      
      strats = [Strat1, Strat2]  # there are more, and different configurations, but kept simple here
      data = loadData()
      for s in strats:
        run(s, data)
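Unrelated to backtrader itself, a quick way to check whether a loop like the one above should leak at all is to force a collection between runs and watch traced memory. This is only a sketch with a stand-in workload (`run_once` is hypothetical, not backtrader code), using the stdlib `tracemalloc` module:

```python
import gc
import tracemalloc

def run_once():
    # hypothetical stand-in for the cerebro run()/plot() wrapper above
    return [0] * 100_000

def measure_growth(n_runs=3):
    """Return traced memory after each run; flat numbers mean no leak."""
    tracemalloc.start()
    sizes = []
    for _ in range(n_runs):
        result = run_once()
        del result        # drop the only reference to the run's data
        gc.collect()      # break any reference cycles right away
        sizes.append(tracemalloc.get_traced_memory()[0])
    tracemalloc.stop()
    return sizes
```

If the numbers keep climbing even with `del` and `gc.collect()` in place, something is still holding a reference to the per-run data.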
      
      
      

      I'm also adding analyzers and a writer, if that makes a difference.
      I have not yet dug deep enough into the internals, but I read something about singletons; is that an issue?
      Is it discouraged to run multiple times like that?
      If this is documented, could you point me to it? (I looked for about an hour but couldn't find it.)
      Does the runonce constructor parameter have something to do with it? (Tried it, but it made no difference.)
      [memory-profile3.png: memory consumption curve over multiple runs]

      I would also be happy with a confirmation (if this is the case) like:
      "Nope, you can do it like this. Memory shouldn't look like this, you f*cked up somewhere else."

      • Kjiessar
        Kjiessar last edited by

        Small correction/addition:
        cerebro.run(exactbars=-1)

        Just because I forgot it in the main thread:
        Hi to the developer, and a big thank you. You created something really great, something many will never achieve in their lives. You did something to help a lot of people and brought it to a level that can be used in production. If at all possible I would like to not only keep it at warm words but buy you a coffee or something. Just point me in the right direction.

        • Kjiessar
          Kjiessar @Kjiessar last edited by

          @kjiessar
          My current assumption is that it is cerebro.plot() which causes the potential memory leak.
          So my humble opinion is that there is a circular reference to the data somewhere, which prevents the garbage collector from cleaning it up.
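Worth noting on the circular-reference theory: Python's cycle collector does reclaim plain reference cycles on its own, so a cycle alone should not leak unless something external (e.g. a module-level registry) keeps a reference alive. A minimal stdlib sketch showing that `gc.collect()` reports cyclic garbage it had to break (`Node` is a made-up class for illustration):

```python
import gc

class Node:
    """Illustrative only: two Nodes pointing at each other form a cycle."""
    def __init__(self):
        self.ref = None

def cyclic_garbage_count():
    a, b = Node(), Node()
    a.ref, b.ref = b, a   # deliberate reference cycle
    del a, b              # no reachable references remain
    # collect() returns how many unreachable objects it found; a nonzero
    # result proves the cycle needed the gc, not plain refcounting
    return gc.collect()
```

If `gc.collect()` called between cerebro runs returns large numbers but memory still grows, the leak is more likely a live reference than an uncollected cycle.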

          • vladisld
            vladisld @Kjiessar last edited by

            @kjiessar how does the memory consumption change without plotting?

            • Kjiessar
              Kjiessar @vladisld last edited by

              @vladisld
              Hi, sorry for not providing that:

              memory-profile5.png

              Hmm, maybe that's not the only issue I'm facing. It's still continuously growing.

              My current workaround:

              import multiprocessing
              from multiprocessing import Pool

              results = []
              # each child handles one task and is then replaced, so any
              # leaked memory dies with the worker process
              with Pool(multiprocessing.cpu_count(), maxtasksperchild=1) as pool:
                  for s in strats:
                      results.append(pool.apply_async(run, (s, data)))
                  [result.wait() for result in results]
                  pool.close()
                  pool.join()
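For reference, the reason `maxtasksperchild=1` acts as a leak firewall: the pool retires each worker process after a single task, so whatever that task leaked is reclaimed by the OS when the process exits. A small self-contained demonstration (stdlib only, not backtrader-specific) that shows every task landing in a fresh worker:

```python
import os
from multiprocessing import Pool

def worker_pid(_):
    # return the worker's process id so we can see workers being replaced
    return os.getpid()

def run_isolated(n_tasks=4):
    with Pool(processes=2, maxtasksperchild=1) as pool:
        return pool.map(worker_pid, range(n_tasks))
```

With `maxtasksperchild=1`, the pids are all distinct even though only two workers run at a time; without it, only two pids would appear.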
              
              • Kjiessar
                Kjiessar @vladisld last edited by

                @vladisld
                Another thing I forgot (sorry, it's hard to find the relevant parts here) is that I'm saving the plots:

                from backtrader import plot

                def saveplots(strats, numfigs=1, iplot=True, start=None, end=None,
                              width=640, height=360, dpi=300, use=None,
                              file_path='', **kwargs):
                    plotter = plot.Plot(**kwargs)
                    figs = []
                    for stratlist in strats:
                        for si, strat in enumerate(stratlist):
                            rfig = plotter.plot(strat, figid=si * 100,
                                                numfigs=numfigs, iplot=iplot,
                                                start=start, end=end, use=use,
                                                width=width, height=height,
                                                dpi=dpi, constrained_layout=True)
                            figs.append(rfig)

                    for fig in figs:
                        for f in fig:
                            # savefig takes no width/height arguments; the
                            # figure size is fixed when the figure is created
                            f.savefig(file_path, dpi=dpi, bbox_inches='tight')

                I tried a lot, like f.clf() and so on, but nothing seemed to help.

                • vladisld
                  vladisld @Kjiessar last edited by

                  @kjiessar said in Memory Leak with multiple cerebros:

                  with Pool(multiprocessing.cpu_count, maxtasksperchild=1) as pool:

                   Long ago I had the same issue when optstrategy was used with multiple strategies (in which case Cerebro will use the multiprocessing Pool.imap for optimization).

                   I ended up changing the Pool.imap parameters to use chunksize=. See more here:

                  https://community.backtrader.com/topic/2397/high-memory-consumption-while-optimizing-using-influxdb-data-feed/2
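For context on the `chunksize=` knob (my reading of the linked thread, not a reproduction of Cerebro's internals): it controls how many work items are pickled and shipped to a worker per batch, which affects how much serialized data sits in flight at once. Its usage looks like this:

```python
from multiprocessing import Pool

def square(x):
    return x * x

def run_with_chunks(values, chunksize=8):
    # chunksize batches the iterable before sending it to workers;
    # tuning it was the memory fix described in the linked thread
    with Pool(processes=2) as pool:
        return list(pool.imap(square, values, chunksize=chunksize))
```

`imap` still yields results in input order regardless of chunking, so changing `chunksize` only affects scheduling and memory behavior, not the results.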
