
Example of adding live-data to static in strategy?



  • Just a side note here:

    I have tried to revert this system to using live data only, fetching the historical data needed to satisfy the lookback requirements of the indicators from IB historical data. I've found that it will not work unless I call .adddata() for each instrument.

    The system will then run for some period of time (a few minutes) before erroring out in a previously reported spot.

     File "backtrader/cerebro.py", line 1166, in _runnext
        dt0 = min((d for i, d in enumerate(dts)
    ValueError: min() arg is an empty sequence
    

    Without the .adddata() call, I get a divide-by-zero error in backtrader/linebuffer.py

    ...
    227.28 227.28
    227.28 454.56
    0.50 0.00
    0.50 0.00
    0.50 0.00
    0.50 0.00
    0.50 0.00
    0.50 0.50
    227.28 227.28
    227.28 227.28
    0.00 0.00
    Traceback (most recent call last):
      File "systems/dv-swing/system-dv-swing.py", line 296, in <module>
        runstrategy()
      File "systems/dv-swing/system-dv-swing.py", line 136, in runstrategy
        results = cerebro.run(runonce=False, tradehistory=True, exactbars=args.exactbars)
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 809, in run
        runstrat = self.runstrategies(iterstrat)
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 933, in runstrategies
        self._runnext(runstrats)
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 1225, in _runnext
        strat._next()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/strategy.py", line 296, in _next
        super(Strategy, self)._next()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/lineiterator.py", line 239, in _next
        indicator._next()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/lineiterator.py", line 239, in _next
        indicator._next()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/linebuffer.py", line 619, in _next
        self.next()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/linebuffer.py", line 746, in next
        self[0] = self.operation(self.a[0], self.b[0])
    ZeroDivisionError: float division by zero
    

    @backtrader let me know if you would prefer I document this on GitHub.
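
    For what it's worth, the ValueError above is plain Python behaviour: min() raises exactly that message when the generator it is given yields nothing, i.e. when no data feed produced a next timestamp. A minimal stdlib reproduction (illustrative only, not backtrader code):

```python
# Reproduce the failure mode: min() over a generator that yields nothing.
dts = []  # stand-in for "no data feed returned a next datetime"

try:
    min(d for i, d in enumerate(dts) if d is not None)
except ValueError as e:
    print(e)  # min() arg is an empty sequence

# Since Python 3.4, min() accepts a sentinel instead of raising:
dt0 = min((d for i, d in enumerate(dts) if d is not None), default=None)
print(dt0)  # None
```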


  • administrators

    @RandyT

    The _idx error is due to a lack of testing. Pandas was not a target for testing, and the start phase of the backfill_from data feed was being skipped. For many data feeds this may not matter at all, but PandasData initializes its internal index there, as you have found.

    Clearing that up, I run into an issue with timezone. Not clear to me where or when a concept of timezone should be applied to backfilled data.

    The problem boils down to the same thing: start was skipped. In most cases there is no timezone for the input, because the data timestamps are in UTC. But if they are in something else, they had better be turned into UTC, and that is the point of the input timezone (parameter tzinput to the data feed).

    If you use the backfilling data as a precursor to live data with IB, you need to ensure your backfilling data is in UTC, because live market data is always delivered in UTC by TWS.

    Note: pushing an update to the development branch for the IBData and OandaData feeds, to ensure that initialization of backfill_from data feeds happens.
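
    To sketch the UTC point above with the stdlib alone (zoneinfo needs Python 3.9+; the tzinput parameter performs this conversion internally, so this is only an illustration of what "turned into UTC" means):

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# Hypothetical backfill timestamp quoted in US/Eastern instead of UTC
local = datetime(2017, 1, 10, 17, 15, tzinfo=ZoneInfo("US/Eastern"))

# Normalize to UTC, which is how TWS delivers live market data
utc = local.astimezone(timezone.utc)
print(utc.isoformat())  # 2017-01-10T22:15:00+00:00
```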


  • administrators

    @RandyT said in Example of adding live-data to static in strategy?:

    Without the .adddata() call, I get a divide-by-zero error in backtrader/linebuffer.py

    The ZeroDivisionError is happening in an indicator, and the output before the error (the data?) seems to show that it is about to happen:

    ...
    227.28 227.28
    0.00 0.00
    Traceback (most recent call last):
    ...
    

    Not knowing what your code does, it is unclear why you have to use .adddata if you are resampling. Below you will find the command-line execution and the output of a run done just now, collecting two instruments with the ibtest.py sample. This is the relevant code in the sample that introduces the data feeds into the system:

        if args.replay:
            cerebro.replaydata(dataname=data0, **rekwargs)
    
            if data1 is not None:
                rekwargs['timeframe'] = tf1
                rekwargs['compression'] = cp1
                cerebro.replaydata(dataname=data1, **rekwargs)
    
        elif args.resample:
            cerebro.resampledata(dataname=data0, **rekwargs)
    
            if data1 is not None:
                rekwargs['timeframe'] = tf1
                rekwargs['compression'] = cp1
                cerebro.resampledata(dataname=data1, **rekwargs)
    
        else:
            cerebro.adddata(data0)
            if data1 is not None:
                cerebro.adddata(data1)
    

    As you can see from the execution line below, --resample is used, which means that only cerebro.resampledata is called. It has run for over 35 minutes with no issues (it has 2 SimpleMovingAverages, one on each data feed).

    Note: output abbreviated to avoid clutter.

    $ ./ibtest.py --resample --timeframe Minutes --compression 1 --data0 EUR.USD-CASH-IDEALPRO --data1 EUR.JPY-CASH-IDEALPRO
    
    Server Version: 76
    TWS Time at connection:20170111 11:51:37 CET
    --------------------------------------------------
    Strategy Created
    --------------------------------------------------
    Timezone from ContractDetails: EST5EDT
    Datetime, Open, High, Low, Close, Volume, OpenInterest, SMA
    ***** STORE NOTIF: <error id=-1, errorCode=2104, errorMsg=Market data farm connection is OK:usfuture>
    ***** STORE NOTIF: <error id=-1, errorCode=2104, errorMsg=Market data farm connection is OK:eufarm>
    ***** STORE NOTIF: <error id=-1, errorCode=2104, errorMsg=Market data farm connection is OK:cashfarm>
    ***** STORE NOTIF: <error id=-1, errorCode=2104, errorMsg=Market data farm connection is OK:usfarm.us>
    ***** STORE NOTIF: <error id=-1, errorCode=2104, errorMsg=Market data farm connection is OK:usfarm>
    ***** STORE NOTIF: <error id=-1, errorCode=2106, errorMsg=HMDS data farm connection is OK:euhmds>
    ***** DATA NOTIF: DELAYED
    ***** DATA NOTIF: DELAYED
    Data0, 0001, 736339.927083, 2017-01-10T17:15:00.000000, 1.0554, 1.0556, 1.0554, 1.05555, -1.0, 0, nan
    Data1, 0001, 736339.927083, 2017-01-10T17:15:00.000000, 122.125, 122.195, 122.125, 122.19, -1.0, 0, nan
    ***** STORE NOTIF: <error id=-1, errorCode=2106, errorMsg=HMDS data farm connection is OK:cashhmds>
    Data0, 0002, 736339.927778, 2017-01-10T17:16:00.000000, 1.05555, 1.05555, 1.0555, 1.05555, -1.0, 0, nan
    Data1, 0002, 736339.927778, 2017-01-10T17:16:00.000000, 122.19, 122.205, 122.185, 122.2, -1.0, 0, nan
    ...
    Data0, 0757, 736340.452083, 2017-01-11T05:51:00.000000, 1.052, 1.05205, 1.052, 1.05205, -1.0, 0, 1.05202
    Data1, 0757, 736340.452083, 2017-01-11T05:51:00.000000, 122.425, 122.425, 122.41, 122.42, -1.0, 0, nan
    ***** DATA NOTIF: LIVE
    ***** DATA NOTIF: LIVE
    Data0, 0758, 736340.452778, 2017-01-11T05:52:00.000000, 1.05205, 1.05215, 1.05205, 1.05215, 0.0, 0, 1.05204
    Data1, 0758, 736340.452778, 2017-01-11T05:52:00.000000, 122.42, 122.425, 122.415, 122.415, 0.0, 0, nan
    ...
    Data0, 0793, 736340.477083, 2017-01-11T06:27:00.000000, 1.05205, 1.0521, 1.05195, 1.052, 0.0, 0, 1.05194
    Data1, 0793, 736340.477083, 2017-01-11T06:27:00.000000, 122.31, 122.31, 122.29, 122.29, 0.0, 0, nan
    Data0, 0794, 736340.477778, 2017-01-11T06:28:00.000000, 1.05205, 1.0521, 1.0519, 1.05195, 0.0, 0, 1.05197
    Data1, 0794, 736340.477778, 2017-01-11T06:28:00.000000, 122.295, 122.31, 122.29, 122.295, 0.0, 0, nan
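
    As for the ZeroDivisionError itself: the traceback line self[0] = self.operation(self.a[0], self.b[0]) is a plain float division when the operation is /, so any 0.0 on the right-hand side (compare the 0.00 rows in the output above) raises immediately. A stdlib illustration, with the kind of guard one might use in a custom indicator (safe_div is a hypothetical helper, not backtrader API):

```python
import math
import operator

# The linebuffer pattern boils down to op(a[0], b[0]):
try:
    operator.truediv(227.28, 0.0)
except ZeroDivisionError as e:
    print(e)  # float division by zero

def safe_div(a0, b0, fallback=float("nan")):
    """Hypothetical guard: substitute a fallback while the divisor hasn't warmed up."""
    return a0 / b0 if b0 else fallback

print(safe_div(227.28, 0.0))     # nan
print(safe_div(227.28, 227.28))  # 1.0
```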
    
    


  • I've not had a chance to dig into this further, but with the latest changes I am erroring out as follows:

      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/feed.py", line 339, in next
        ret = self.load()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/feed.py", line 411, in load
        _loadret = self._load()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/feeds/ibdata.py", line 480, in _load
        useRTH=self.p.useRTH, tz='GMT')
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/stores/ibstore.py", line 717, in reqHistoricalDataEx
        self.hisfmt[tickerId] = timeframe >= TimeFrame.Days
    AttributeError: 'IBStore' object has no attribute 'hisfmt'
    


  • @backtrader this appears to be a typo on line 717 of stores/ibstore.py



  • @backtrader I need a bit of clarification on the above regarding timezones, tzinput and IBData.

    The data I am using as backfill is daily data and has no time associated with the date index; purely dates. I would assume that backtrader will do the right thing with that data in terms of managing time.

    Do I need to set tzinput for this?
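
    In case it helps to see what is at stake: a purely date-indexed row parses to midnight, whereas daily bars are normally stamped at the session end (the sessionend parameter exists for that; whether it covers this backfill path I can't confirm). A stdlib sketch of the two timestamps involved:

```python
from datetime import datetime, time

# A date-only index entry parses to midnight...
bar_dt = datetime.strptime("2017-01-10", "%Y-%m-%d")
print(bar_dt)  # 2017-01-10 00:00:00

# ...whereas a daily bar is conventionally stamped at session end
# (16:00 here is an assumption matching the SPY feed in this thread)
stamped = datetime.combine(bar_dt.date(), time(16, 0))
print(stamped)  # 2017-01-10 16:00:00
```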



  • This seems to be working as expected, with one possible exception: on some starts of the system, it will fail with a backtrace. Restarting it immediately with the same parameters eventually takes hold. Not sure if this is due to some missing or unexpected response from IB.

    Here is the backtrace when this fails:

      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 1225, in _runnext
        strat._next()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/strategy.py", line 296, in _next
        super(Strategy, self)._next()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/lineiterator.py", line 236, in _next
        clock_len = self._clk_update()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/strategy.py", line 285, in _clk_update
        newdlens = [len(d) for d in self.datas]
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/strategy.py", line 285, in <listcomp>
        newdlens = [len(d) for d in self.datas]
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/lineseries.py", line 432, in __len__
        return len(self.lines)
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/lineseries.py", line 199, in __len__
        return len(self.lines[0])
    ValueError: __len__() should return >= 0
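
    For context, that final ValueError comes from CPython itself: len() refuses any __len__ that returns a negative number, which points at a line buffer whose internal length temporarily went below zero during startup. A minimal reproduction (illustrative only, not backtrader code):

```python
class BrokenBuffer:
    """Stand-in for a line whose internal length went negative."""
    def __len__(self):
        return -1

try:
    len(BrokenBuffer())
except ValueError as e:
    print(e)  # __len__() should return >= 0
```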
    

  • administrators

    @RandyT It is a typo. The code was fresh and meant for @skytrading54, who was looking at timezone issues.

    The code you hit was triggered by a disconnection, or because you were feeding initial data with backfill_from. That part was still untested.

    Fixed and pushed.


  • administrators

    @RandyT The problem here is understanding what has been executed. The traces help only so much.



  • @backtrader I wanted to share where I ended up on setting up these data feeds. Decided not to share this in the other thread regarding "confusion" to avoid creating more confusion in that discussion. :smile:

    I have the following code to set up the two static data sources I am using, and to use backfill_from to build the live feeds from these sources. If I am not running on live data, I just use .adddata() to set up the static data sources for backtesting.

    Note: I added the time.sleep(5) after the setup of these live feeds, which seems to have solved the problem mentioned above with occasional failures to start. There appears to be some delay in establishing these connections with IB, and we apparently aren't blocking until the data is there, or are getting some false indication that the feed is ready.

        # Parse static ES data file
        bfdata0path = os.path.join(modpath, args.bfdata0)
        bfdata0frame = pd.read_csv(bfdata0path, header=0, skiprows=0, parse_dates=True, index_col=0)
        bfdata0 = bt.feeds.PandasData(dataname=bfdata0frame, volume=' Volume', **dkwargs)
    
        # Parse static SPY data file
        bfdata1path = os.path.join(modpath, args.bfdata1)
        bfdata1frame = pd.read_csv(bfdata1path, header=0, skiprows=0, parse_dates=True, index_col=0)
        bfdata1 = bt.feeds.PandasData(dataname=bfdata1frame, volume=' Volume', openinterest=None)
    
        if args.live:
            storekwargs = dict(
                host=args.host,
                port=args.port,
                clientId=args.clientid,
                timeoffset=not args.no_timeoffset,
                reconnect=args.reconnect,
                timeout=args.timeout,
                notifyall=args.notifyall,
                _debug=args.debug
            )
    
            ibstore = bt.stores.IBStore(**storekwargs)
            broker = ibstore.getbroker()
            cerebro.setbroker(broker)
    
            # fetch minimum required data for ES [does this actually work?]
            fetchfrom = (dt.datetime.now() - timedelta(hours=7))
            # ES Futures Live data timeframe resampled to 1 Minute
            data0 = ibstore.getdata(dataname=args.live_es, fromdate=fetchfrom,
                                    timeframe=bt.TimeFrame.Minutes, compression=1)
            cerebro.resampledata(data0, name="ES-minutes", timeframe=bt.TimeFrame.Minutes, compression=1)
            data0.plotinfo.plot = False
    
            # SPY Live data timeframe resampled to 1 Day
            data1 = ibstore.getdata(dataname=args.live_spy, backfill_from=bfdata1,
                                    timeframe=bt.TimeFrame.Days, compression=1,
                                    sessionend=dt.time(16, 0))
            cerebro.resampledata(data1, name="SPY-daily", timeframe=bt.TimeFrame.Days, compression=1)
            data1.plotinfo.plot = False
    
            data2 = ibstore.getdata(dataname=args.live_es, backfill_from=bfdata0,
                                    timeframe=bt.TimeFrame.Days, compression=1)
            cerebro.resampledata(data2, name="ES-daily", timeframe=bt.TimeFrame.Days, compression=1)
    
            time.sleep(5)  # solves issue with data ingestion failure
        else:  # we are in backtest mode only
            cerebro.adddata(bfdata0, name='ES')
            cerebro.adddata(bfdata1, name='SPY')
    

  • administrators

    That time.sleep(5) shouldn't help at all.

    The connection to TWS is first made when you call cerebro.run(), which for sure happens after that time.sleep(5).

    But it is good if it helps.
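
    If a wait before run() does turn out to be necessary, a fixed sleep can be replaced by waiting on an event with a timeout, so the code proceeds the moment readiness is signalled and only falls back to the full delay otherwise. A stdlib sketch; the readiness signal here is simulated with a timer, since backtrader itself only connects inside cerebro.run():

```python
import threading

feed_ready = threading.Event()

# Simulate a readiness notification arriving after 0.1 s
threading.Timer(0.1, feed_ready.set).start()

# Wait up to 5 s, but return as soon as the event is set
if feed_ready.wait(timeout=5):
    print("feed ready")  # continues immediately after the signal
else:
    print("timed out")   # fall back, comparable to the fixed sleep(5)
```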