
Interactive Brokers Multiple reqMktData

  • I have a strategy that attaches to multiple IB symbols (approx. 10). However, after backfilling from a file and connecting to IB I get the following error:

    Max rate of messages per second has been exceeded

    Is it possible that backtrader is launching too many requests to TWS?

    I have been looking into the backtrader code and I believe self.ib.reqMktData is being called too many times: N^2 times, where N is the number of symbols/datas.

    In _runnext, a loop runs over each of the datas (N):
    def _runnext(self, runstrats):
        for d in datas:
                    qlapse = datetime.datetime.utcnow() - qstart
                    d.do_qcheck(newqcheck, qlapse.total_seconds())
    which ends up causing startdatas to get called. In startdatas we have:
    def startdatas(self):
        for data in self.datas:
             t = threading.Thread(target=data.reqdata)

    which calls the data feed's reqdata, which finally does:

    self.qlive = self.ib.reqMktData(self.contract)

    Therefore we have two for loops on self.datas.

    I have been able to avoid the max-request problem by adding a time.sleep in startdatas after waiting for the threads, but I believe there should be a better way.

    This is my first post and I am not familiar with the platform. I hope I am not too far off and this has not been covered in some other post... My guess is that self.ib.reqMktData should only be called once per symbol. Am I missing something?
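    The N^2 pattern described above can be sketched with a toy example. Everything here (class and function names, the symbol list) is an illustrative stand-in, not backtrader's actual internals:

```python
# Toy illustration of the N^2 pattern: an outer per-data loop that triggers
# a start routine, which itself loops over all datas again. Names are
# illustrative stand-ins, not backtrader's actual internals.
class FakeStore:
    def __init__(self):
        self.requests = 0

    def reqMktData(self, contract):
        self.requests += 1          # count every market-data subscription

datas = ['AAPL', 'MSFT', 'GOOG']    # N = 3 symbols

store = FakeStore()

def startdatas():
    for d in datas:                 # inner loop over all datas
        store.reqMktData(d)

for d in datas:                     # outer loop, also once per data
    startdatas()

print(store.requests)               # → 9, i.e. N**2 subscription requests
```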


  • My guess is that threading.Thread is a constructor only that doesn't start the thread until .start(). And start() looks to be triggered by a timer.

    However, when downloading many symbols (37) I also receive the 'Max rate of messages per second exceeded' error. A puzzle.

  • I've encountered a similar issue and it seems to be a bug in the backtrader code. In my case the 'minimal fix' was to just patch the reqdata method to issue the subscription request only in case the feed status wasn't _ST_LIVE. Please see my post here. However, a more proper fix should avoid multiple feed initializations altogether (it's more involved, though).
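    A self-contained sketch of the idea behind that minimal fix: remember the feed state and skip duplicate subscription requests once the feed is live. The state constants and classes below are illustrative stand-ins, not backtrader's real internals:

```python
# Sketch of the guard: subscribe only if the feed is not already live.
# _ST_START/_ST_LIVE and both classes are illustrative stand-ins.
_ST_START, _ST_LIVE = 0, 1

class CountingIB:
    def __init__(self):
        self.calls = 0

    def reqMktData(self, contract):
        self.calls += 1             # count subscription requests sent to IB

class GuardedFeed:
    def __init__(self, ib, contract):
        self.ib = ib
        self.contract = contract
        self._state = _ST_START

    def reqdata(self):
        if self._state == _ST_LIVE:
            return                  # already subscribed: do nothing
        self.ib.reqMktData(self.contract)
        self._state = _ST_LIVE

ib = CountingIB()
feed = GuardedFeed(ib, 'AAPL')
for _ in range(5):                  # simulate repeated start-up invocations
    feed.reqdata()
print(ib.calls)                     # → 1: only one subscription is issued
```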


  • @vladisld

    Interesting. I tried your solution, but unfortunately the combination of my historical data requests and live requests still jointly caused a pacing error. I've created two solutions for throttling the connection: one ugly but simple, and one elegant but complex.

    The first, quite evil, solution is to simply wait one second every 50 transactions to ibstore/ibdata: call the routine _throt_delay before each IB request in ibstore/ibdata. Terrible, but on the bright side it becomes mathematically impossible to exceed IB's pacing restriction, and for most purposes an occasional one-second delay is acceptable. The globals are due to the fact that both historical and live requests must add to the throttle count by calling _throt_delay before the data request calls, which come from a variety of routines in ibstore/ibdata.

    import time

    reqcount = 0            # throttle limits
    reqlmt = 50

    def _throt_delay():
        global reqcount
        if reqcount >= reqlmt - 1:
            time.sleep(1)   # wait out IB's pacing window
            reqcount = 0
        reqcount += 1

    The second, more elegant, solution is to throttle nothing unless the count comes to the edge of the maximum allowed transaction requests, throttle the minimum at that point, and clear out the count as time passes. This routine also exploits the fact that IB only recalculates every four seconds or so, so up to 250 requests incur no delay, and beyond that only 1/50 s per request:

    import time

    reqcount = 0            # throttle limits
    lastreqtime = time.time()
    reqlmt = 50

    def _throt_delay():
        global reqcount, lastreqtime
        throttime = 1 / reqlmt  # throttle delay as a fraction of a second
        reqcount += 1           # add one throttle period
        # decrement the count by the time that has passed:
        reqcount -= int((time.time() - lastreqtime) / throttime)
        reqcount = max(reqcount, 0)  # keep it above zero
        lastreqtime = time.time()
        # sleep it off if the pacing is too fast...
        if reqcount >= 4 * reqlmt - 1:
            time.sleep(throttime)

    Now, understandably, Daniel R. would find the sleep abhorrent, although it's called minimally and with the absolute minimum period required; it could be substituted with a thread barrier or another synchronization primitive if appropriate. However, the minimal and rare sleep avoids any concurrency issues with the threads just fine.
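    The leaky-bucket behaviour can be checked deterministically by injecting the clock and the sleep function. The Throttle class below is a hedged rework of the routine above, not the poster's actual code; with a frozen clock nothing decays, so sleeping starts exactly at the 4 * 50 request edge:

```python
import time

# Deterministic rework of the leaky-bucket throttle, with an injectable
# clock and sleep so the behaviour can be checked without real waiting.
# The class and parameter names are illustrative.
class Throttle:
    def __init__(self, reqlmt=50, burst=4, now_fn=time.time, sleep_fn=time.sleep):
        self.reqlmt = reqlmt        # allowed requests per second
        self.burst = burst          # assumed IB recalculation window (s)
        self.count = 0
        self.now = now_fn
        self.sleep = sleep_fn
        self.last = now_fn()

    def delay(self):
        throttime = 1 / self.reqlmt
        self.count += 1
        # decay the count by the time that has passed
        self.count -= int((self.now() - self.last) / throttime)
        self.count = max(self.count, 0)
        self.last = self.now()
        if self.count >= self.burst * self.reqlmt - 1:
            self.sleep(throttime)   # only throttle at the edge

# Frozen clock: time never advances, so nothing decays.
slept = []
th = Throttle(now_fn=lambda: 0.0, sleep_fn=slept.append)
for _ in range(250):
    th.delay()
print(len(slept))                   # → 52: only calls 199..250 sleep
```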

  • @bigdavediode said in Interactive Brokers Multiple reqMktData:

    the combination of my historical data requests combined with live requests still jointly caused a pacing error

    That sounds strange, given that you only subscribe to approx. 10 symbols. It would be interesting to take a look at the TWS API logs to see what is really happening.

  • @vladisld

    My strategy subscribes to about 37 symbols, which backfill_from a pandascsv along with live trading. The pandascsv has its own non-ibstore historical request mechanism, thus the occasional throttling required.

  • @bigdavediode said in Interactive Brokers Multiple reqMktData:

    My strategy subscribes to about 37 symbols

    It is still hard to explain the "Max rate of messages per second has been exceeded" error (error 100) without looking at the TWS API logs. AFAIK, the TWS API has a 50 msg/sec limitation averaged over 5 sec (i.e. 250 msgs / 5 sec), in addition to 60 historical data requests within 10 minutes and some other, more obscure limitations.

    So although I agree that a throttling mechanism should be in place (I'd argue whether it needs to be implemented in backtrader or in the lower component, ibpy, like the one implemented in the ib_insync library), I would still first make sure that there are no excessive requests generated by the current backtrader code by looking at the TWS API logs.

  • @vladisld

    Despite the IB documentation, my impression is that the IB API waives the limit for most historical data request cases. IB documents a limitation of 1 historical data request per 10 seconds, but according to this: "Currently IB waives that limit for bar sizes larger than 30 seconds." However, historical requests do seem to add to the 50/s total request limit pacing, and that was my problem at the time: live + historical = pacing error.

    So I ran some tests on historical data requests just now and pounded IB with requests for historical 30-min and 1-day bars, and I was not able to trip the 60-historical-requests-per-ten-minutes limit, nor the 1-request-every-10-seconds limit. So it's likely that the statement above, that the limit is waived for bar sizes larger than 30 seconds, is correct. If someone else wants to test bars of 30 seconds or less, that would be interesting.

    As for the sliding-window request limit, your statement of 250/5 secs is almost certainly correct, although there is some mention on the web that it's a 4-second window. Thus my _throt_delay code assumes four seconds to be conservative.

    I'm not sure the throttling code could be put into ibpy without more understanding of its structure.

  • Just to bring up one more point regarding the throttling mechanism: starting from v.974 of the TWS API there is a '+PACEAPI' connection option that effectively paces the incoming messages. Quoting:

    API messages sent at a higher rate than 50/second can now be paced by TWS at the 50/second rate instead of potentially causing a disconnection. This is now done automatically by the RTD Server API and can be done with other API technologies by invoking SetConnectOptions("+PACEAPI") prior to eConnect.

    Unfortunately, I'm not sure the IbPy library is updated to include the SetConnectOptions method. AFAIU, IbPy just generates its source code from the official TWS Java API (by translating it to Python), but it hasn't been updated to the latest version yet. This is probably one more reason to directly use the official TWS Python API instead of IbPy.
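    For reference, with the official TWS Python API (the ibapi package) the option can be set before connecting. This is a hedged connection-setup sketch, assuming TWS/API v974+ and a paper-trading TWS listening on port 7497; verify the method against your installed ibapi version:

```python
from ibapi.client import EClient
from ibapi.wrapper import EWrapper

# Minimal sketch (assumes the official ibapi package, TWS API >= v974):
class App(EWrapper, EClient):
    def __init__(self):
        EClient.__init__(self, self)

app = App()
app.setConnectOptions('+PACEAPI')   # ask TWS to pace messages at 50/sec
app.connect('127.0.0.1', 7497, clientId=1)
```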

  • @vladisld said in Interactive Brokers Multiple reqMktData:

    Starting from v.974 of TWS API there is a '+PACEAPI' connection option that is effectively pacing the incoming messages.

    Hi Vladislav,
    do you know if we have any resolution/workaround for that issue in BT/IBpy2?
    I've searched for it with no luck; I thought maybe someone here had already added that code to a private repo.

  • There are two problems here:

    1. Up to release there was a problem with multiple data subscriptions with IBBroker, where the subscription request was sent multiple times for each symbol (see my post here).

    This issue is now fixed in

    2. The second issue is with historical data requests in general - this has nothing to do with the Backtrader code - it is an IB limitation (and it has changed a few times through the history of this API) - unless you are expecting automatic throttling to be implemented in Backtrader itself.

    It seems @bigdavediode has some fix in his own repo, but no PR was submitted to the main repo.

    As for IbPy - AFAIK this library wasn't updated for a long time already - so there is no way to enable the "+PACEAPI" IB connection option right now.

  • @vladisld thanks for your response!
    I will use @bigdavediode's workaround for now until I find a permanent solution / develop my own feed for a different data provider.

  • Thank you to everyone in the community for all the help/tips.
    I am having issues getting live data from IB for about 15 instruments. I am backfilling from local CSV data.
    I would like to implement @bigdavediode's solution, but I am not sure where the posted code should go.
    Can anyone post a more detailed example?

    I am interested in 5 minute data, being replayed to 30min.
    Maybe there is a better way I can access it?
    Here is my code:

    cerebro = bt.Cerebro()
    signal.signal(signal.SIGINT, createCerebroStopSignalHandler(cerebro.runstop))
    store = bt.stores.IBStore(port=7497)
    watchlist = ['FTEC','FHLC','FREL','FDIS','FUTY','FNCL','FSTA','FCOM','FENY','FIDU','FMAT','QQQ','VIG','JETS','VIOO']
    for sym in watchlist:
        pricefile = os.path.abspath('..') + f'\\data\\prices\\{sym}.csv'
        localData = MyDataReaderIBHistorical(dataname=pricefile, timeframe=bt.TimeFrame.Minutes, fromdate=prestartdate)
        data = store.getdata(dataname=sym, qcheck=2.0, backfill_from=localData)
        data.resample(timeframe=bt.TimeFrame.Minutes, compression=5)
        cerebro.replaydata(data, timeframe=bt.TimeFrame.Minutes, compression=30, name=sym)
    cerebro.addstrategy(MeanReversionMaster_Multi, startDate=startDate, silent=False, notify=True, longOnly=False, liveTrader=True)
    strat = cerebro.run(stdstats=False)
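    As for where the throttle would hook in: one possibility, if you'd rather not edit backtrader's ibstore source directly, is to wrap the store's request methods so a delay function runs before each call. The wrapper below is generic and self-contained; applying it to IBStore methods such as reqMktData is an untested suggestion, so check the method names against your installed backtrader version:

```python
import functools

def throttled(fn, delay_fn):
    """Return fn wrapped so that delay_fn() runs before every call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        delay_fn()                  # pace the request first
        return fn(*args, **kwargs)
    return wrapper

# Demonstration with a stand-in request function instead of a real IBStore:
calls = []

def fake_request(x):
    return x * 2

paced = throttled(fake_request, lambda: calls.append(1))
result = paced(21)
print(result, len(calls))           # → 42 1
```

With a real store the same idea would look like `store.reqMktData = throttled(store.reqMktData, _throt_delay)`, applied once after the store is created.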
