Interactive Brokers Multiple reqMktData
esteve last edited by
I have a strategy that attaches to multiple IB symbols (approximately 10). However, after backfilling from a file and connecting to IB I get the following error:
Max rate of messages per second has been exceeded
Is it possible that backtrader is launching too many requests to TWS ?
I have been looking into backtrader code and I believe self.ib.reqMktData is being called too many times. N^2 times where N is the number of symbols/datas.
At _runnext in cerebro.py, d.next is called for each of the N datas:
```python
# cerebro.py
def _runnext(self, runstrats):
    ...
    for d in datas:
        qlapse = datetime.datetime.utcnow() - qstart
        d.do_qcheck(newqcheck, qlapse.total_seconds())
        drets.append(d.next(ticks=False))
```
d.next ends up causing startdatas in ibstore.py to be called. In startdatas we have:
```python
# ibstore.py
def startdatas(self):
    for data in self.datas:
        t = threading.Thread(target=data.reqdata)
```
Which calls reqdata in ibdata.py, which finally does:

```python
self.qlive = self.ib.reqMktData(self.contract)
```
Therefore we have two nested loops over self.datas, which explains the N^2 calls.
I have been able to avoid the max-rate problem by adding a time.sleep in startdatas after waiting for the threads, but I believe there should be a better way.
This is my first post and I am not familiar with the platform, so I hope I am not too far off and this has not been covered in some other post... My guess is that self.ib.reqMktData should only be called once per symbol. Am I missing something?
My guess is that threading.Thread is only a constructor and doesn't run the target until .start() is called, and start() looks to be triggered by a timer.
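For reference, a minimal standalone illustration of that behavior (plain Python, nothing backtrader-specific): constructing a threading.Thread does nothing until .start() is called.

```python
import threading

calls = []

def reqdata():
    # Stand-in for the data-request target; records that it ran
    calls.append(1)

# Constructing the Thread does NOT run the target...
t = threading.Thread(target=reqdata)
assert calls == []  # nothing has happened yet

# ...only start() actually executes it
t.start()
t.join()
print(calls)  # the target ran exactly once
```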
However, when downloading many symbols (37) I also receive the 'Max rate of messages per second exceeded' error. A puzzle.
I've encountered a similar issue and it seems to be a bug in the backtrader code. In my case the 'minimal fix' was to patch the reqdata method in ibdata.py to issue the subscription request only if the feed status wasn't _ST_LIVE. Please see my post here. However, a more proper fix should avoid multiple feed initializations altogether (though it is more involved).
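For what it's worth, here is a self-contained sketch of the idea behind that patch. The names FakeIB, FakeFeed and the simplified state constants are my own stand-ins, not backtrader's actual code; the point is only to show the guard that makes the subscription fire once per feed.

```python
_ST_FROM, _ST_LIVE = 0, 1  # simplified stand-ins for backtrader's feed states

class FakeIB:
    """Counts how many market-data subscriptions were requested."""
    def __init__(self):
        self.calls = 0

    def reqMktData(self, contract):
        self.calls += 1
        return object()  # stand-in for the live queue

class FakeFeed:
    def __init__(self, ib, contract):
        self.ib = ib
        self.contract = contract
        self._state = _ST_FROM
        self.qlive = None

    def reqdata(self):
        # Only subscribe if the feed is not already live
        if self._state != _ST_LIVE:
            self.qlive = self.ib.reqMktData(self.contract)
            self._state = _ST_LIVE
        return self.qlive

ib = FakeIB()
feed = FakeFeed(ib, "AAPL")
for _ in range(5):  # repeated calls, e.g. from the nested loops above
    feed.reqdata()
print(ib.calls)  # → 1: one subscription despite five calls
```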
Interesting. I tried your solution, but unfortunately the combination of my historical data requests and live requests still jointly caused a pacing error. I've created two solutions for throttling the connection: one ugly but simple, and one elegant but complex.
The first, quite evil, solution is to simply wait one second every 50 requests in ibstore/ibdata, calling the routine _throt_delay before each IB request. Terrible, but on the bright side it becomes mathematically impossible to exceed IB's pacing restriction, and for most purposes an occasional one-second delay is acceptable. The global is needed because both the historical and the live code paths must add to the throttle count by calling _throt_delay before data request calls that originate from a variety of routines in ibstore/ibdata.
```python
# ibstore.py / ibdata.py (patch sketch)
from time import sleep

reqcount = 0  # throttle counters
reqlmt = 50

...

@staticmethod
def _throt_delay():
    global reqcount, reqlmt
    if reqcount >= reqlmt - 1:
        reqcount = 0
        sleep(1)  # wait one second every `reqlmt` requests
    reqcount += 1
```
The second, more elegant solution throttles nothing until the request rate approaches the maximum allowed, throttles the minimum amount at that point, and clears the count as time passes. This routine also exploits the fact that IB only recalculates every four seconds or so, so up to 4 × 50 = 200 requests incur no delay, and beyond that only 1/50 s per request:
```python
# ibstore.py / ibdata.py (patch sketch)
import time
from time import sleep

reqcount = 0  # throttle counters
lastreqtime = time.time()
reqlmt = 50

...

@staticmethod
def _throt_delay():
    global reqcount, lastreqtime
    throttime = 1.0 / reqlmt  # throttle delay as a fraction of a second
    reqcount += 1  # add one throttle period
    # decrement the count by the time passed
    reqcount -= int((time.time() - lastreqtime) / throttime)
    reqcount = max(reqcount, 0)  # keep it above zero
    lastreqtime = time.time()
    # sleep it off if the pacing is too fast
    if reqcount >= 4 * reqlmt - 1:
        sleep(throttime)
```
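A self-contained rendition of the same leaky-bucket idea (my own sketch, not the actual ibstore/ibdata patch; the class name and parameters are made up), which avoids the globals and is easy to test:

```python
import time
from time import sleep

class LeakyThrottle:
    """Allow bursts of up to window * rate requests, then pace at 1/rate sec."""

    def __init__(self, rate=50, window=4):
        self.rate = rate      # requests per second (IB's 50/sec limit)
        self.window = window  # assumed averaging window in seconds
        self.count = 0
        self.last = time.time()

    def wait(self):
        throttime = 1.0 / self.rate
        self.count += 1
        # "leak": forget old requests in proportion to elapsed time
        self.count -= int((time.time() - self.last) / throttime)
        self.count = max(self.count, 0)
        self.last = time.time()
        if self.count >= self.window * self.rate - 1:
            sleep(throttime)  # minimal pause, only when near the limit

throttle = LeakyThrottle()
for _ in range(10):
    throttle.wait()  # well under the burst budget: no sleeping happens here
print(throttle.count <= 10)
```

Calling `wait()` before every outgoing IB request keeps the client inside the burst budget without any delay in the common case.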
Now, understandably, Daniel R. would find the sleep abhorrent, although it's called minimally and with the absolute minimum period required; it could be substituted with a thread barrier or another synchronization primitive if appropriate. However, the minimal and rare sleep avoids any concurrency issues with the threads just fine.
the combination of my historical data requests combined with live requests still jointly caused a pacing error
That sounds strange, given that you subscribe to only approximately 10 symbols. It would be interesting to take a look at the TWS API logs to see what is really happening.
My strategy subscribes to about 37 symbols which backfill_from a pandascsv along with live trading. The pandascsv has its own non-ibstore historical request mechanism, hence the occasional throttling required.
My strategy subscribes to about 37 symbols
It is still hard to explain the "Max rate of messages per second has been exceeded" error (error 100) without looking at the TWS API logs. AFAIK, the TWS API has a 50 msg/sec limitation averaged over 5 sec (i.e. 250 messages / 5 sec), in addition to a limit of 60 historical data requests within 10 minutes and some other more obscure limitations (https://interactivebrokers.github.io/tws-api/historical_limitations.html).
So although I agree that a throttling mechanism should be in place (I'd argue over whether it belongs in backtrader or in the lower-level component, IbPy, like the one implemented in the ib_insync library), I would still first make sure that no excessive requests are generated by the current backtrader code by looking at the TWS API logs.
Despite the IB documentation, my impression is that the IB API waives the limit for most historical data request cases: "IB has a limitation, 1 historical data request per 10 seconds." According to this: https://groups.io/g/insync/topic/24001598?p=Created,,,20,2,0,0::,,,0,0,0,24001598 "Currently IB waives that limit for bar sizes larger than 30 seconds." However, historical requests do seem to add to the 50/s total request limit pacing, and that was my problem at the time: live + historical = pacing error.
So I ran some tests on historical data requests just now and pounded IB with requests for historical 30-minute bars and 1-day bars, and I was not able to trip the 60-historical-requests-per-ten-minutes limit, nor the 1-request-every-10-seconds limit. So it's likely that the statement above, that the limit is waived for bar sizes larger than 30 seconds, is correct. If someone else wants to test bars of 30 seconds or less, that would be interesting.
As for the sliding window request limit your statement of 250/5 secs is almost certainly correct although there is some mention on the web that it's a 4 second window. Thus my _throt_delay code assumes four to be conservative.
Not sure if the throttling code could be put into ibpy without more understanding of its structure.
Just to bring up one more point regarding the throttling mechanism: starting from v974 of the TWS API there is a '+PACEAPI' connection option that effectively paces the API messages. Quoting (https://www.interactivebrokers.com/en/index.php?f=5061):
API messages sent at a higher rate than 50/second can now be paced by TWS at the 50/second rate instead of potentially causing a disconnection. This is now done automatically by the RTD Server API and can be done with other API technologies by invoking SetConnectOptions("+PACEAPI") prior to eConnect.
Unfortunately, I'm not sure the IbPy library has been updated to include the SetConnectOptions method. AFAIU, IbPy just generates its source code from the official TWS Java API (by translating it to Python), but it hasn't been updated to the latest version yet. This is probably one more reason to use the official TWS Python API directly instead of IbPy.