
Example of adding live-data to static in strategy?



  • The summary you write is correct.

    The system errors in the indicator with exactbars set to anything other than 0. The debugger shows that even though len() of self.dval8 is 756 (or 757 if forcing that with addminperiod()), easily more than the 252 values needed here, .get() shows that there are actually only 8 values available to it.

     File "/home/inmate/backtrader/indicators/indicator.py", line 27, in next
        value = rank(self.dval8.get(size=252, ago=0), self.dval8[0])
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/linebuffer.py", line 182, in get
        return list(islice(self.array, start, end))
    ValueError: Indices for islice() must be None or an integer: 0 <= x <= sys.maxsize.
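    (Editor's note) The ValueError comes straight from itertools.islice, which rejects negative indices. A minimal sketch of what presumably happens when get(size=252) is asked of a buffer that only retains 8 values; the internally computed negative start index is an assumption:

    ```python
    from itertools import islice

    buffered = list(range(8))  # only 8 values actually retained

    # a slice request larger than the buffer can yield a negative start index
    start = len(buffered) - 252  # -244
    try:
        list(islice(buffered, start, None))
    except ValueError as exc:
        print(type(exc).__name__)  # islice rejects negative indices
    ```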
    

    Setting exactbars to 0 results in the following error:

     File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 1166, in _runnext
        dt0 = min((d for i, d in enumerate(dts)
    ValueError: min() arg is an empty sequence
    

  • administrators

    len(X) doesn't tell you how many values are in the buffer. It tells you how many values have been seen/produced.

    The actual internal buffer length can be found out through: self.dval8.buflen()

    The problem is clearer now: addminperiod will not magically increase the internal buffer length of dval8.
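    (Editor's note) The len() vs buflen() distinction can be illustrated with a toy buffer; this is only a sketch of the concept, not backtrader's actual LineBuffer:

    ```python
    from collections import deque

    class ToyLineBuffer:
        """Toy model: with memory saving active, only the last `maxlen` values
        are retained, while len() keeps counting every value ever produced."""
        def __init__(self, maxlen):
            self._buf = deque(maxlen=maxlen)
            self._seen = 0

        def append(self, value):
            self._buf.append(value)
            self._seen += 1

        def __len__(self):   # how many values have been seen/produced
            return self._seen

        def buflen(self):    # how many values are actually stored
            return len(self._buf)

    line = ToyLineBuffer(maxlen=8)
    for v in range(756):
        line.append(v)
    print(len(line), line.buflen())  # 756 8
    ```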

    From your thread about porting a Pandas indicator:

    dval8 = dval7 * 0.6 + dval7(-1) * 0.25
    

    And dval7 also goes back to dval6(-1). The chain of -1s ends up adding a total of 8 to dval8, which is what you see. The 756 (or 252) is applied to the output which the indicator produces, not to the values being calculated inside the indicator. The only dependency dval8 has is the chain up to dval1
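    (Editor's note) The arithmetic of that chain can be sketched in plain Python; this models the idea, not backtrader's internals: each (-1) reference raises the minimum period of the dependent line by one, so eight chained stages add eight bars:

    ```python
    def minperiod_after_delay(minperiod, ago):
        # referencing a line `ago` bars back raises its minimum period by `ago`
        return minperiod + ago

    mp = 1  # a plain data line needs a single bar
    for _ in range(8):  # dval1 .. dval8, each using its predecessor at (-1)
        mp = max(mp, minperiod_after_delay(mp, 1))

    print(mp - 1)  # 8 extra bars accumulated by the chain of -1s
    ```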

    The indicator ends with:

    self.lines.ind = dval8
    

    It is in self.lines.ind where you will find your 756 items (or 252)



  • One other possibly relevant piece of info: I am running as follows:

    results = cerebro.run(runonce=False, tradehistory=True, exactbars=args.exactbars)
    

    If I run the system with exactbars=0, whether using live data or just the static data, my buflen() in the indicator is the full 756 and it errors in the other spot I have mentioned.

    If I run the system with exactbars set to value other than 0, my buflen() in the indicator is only 8.

    I am not clear on what I need to do to fix this.



  • Ok, moving forward...

    The indicator in question is implemented with both an __init__() and next(). I found I needed to calculate, in next(), the values that need the slice of data. Perhaps there is a better way.

    If I add a bogus assignment in the __init__() of the indicator to assign self.dhist = self.dval8(-757), I now find that regardless of the value of exactbars, I always have a self.dval8.buflen() of greater than 756.

    After adding that bit of code to force the preservation of enough data to feed the indicator, the system now consistently errors out in the original place I reported:

     File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 1166, in _runnext
        dt0 = min((d for i, d in enumerate(dts)
    ValueError: min() arg is an empty sequence
    

    I would have expected addminperiod() to make sure I had the specified amount of data in the buffer. This seems to be a rather hackish way to work around this problem. I must be doing something wrong there.


  • administrators

    addminperiod is consistent; it is your expectation of what it should do that is off.

    As explained above if you have:

    
    class MyIndicator(bt.Indicator):
        lines = ('ind',)
    
        def __init__(self):
            self.addminperiod(756)
            ...
            ...
            dval8 = dval7 * 0.6 + dval7(-1) * 0.25
            self.lines.ind = dval8
    

    addminperiod is being called as self.addminperiod and as such it will affect the object to which it belongs. It doesn't even know that dval8 will be calculated.

    But self.lines.ind falls under the umbrella of MyIndicator and is consequently affected by the operation self.addminperiod(756).

    And the assignment self.lines.ind = dval8 makes sure that you will find 756 values under self.lines.ind

    In your implementation now (unknown here), you may try the following instead of the dhist workaround:

    self.dval8.addminperiod(756)
    

    The code is there, but so far there hasn't been any use case for this, and it is still very unclear why you may need the last 756 values of a calculation. There must be something else which is being calculated.



  • In next() of the indicator, I am doing the following:

    self.lines.dvo[0] = _percent_rank(self.dval8.get(size=756, ago=0), self.dval8[0])
    

    So having solved this mystery, I continue to have the other error. I'll see about getting in the debugger there to see if I can determine which feed is the problem.



  • print(dts) shows the following when things error:

    ...
    [736326.0, 736326.0, 736326.0, 736321.0]
    [736328.0, 736328.0, 736328.0, 736326.0]
    ***** DATA NOTIF: DELAYED
    ***** DATA NOTIF: DELAYED
    [None, None, None, 736327.0]
    


  • I've rewritten the code in question with something a bit more understandable by my very green python brain and to enable dropping into the debugger at the appropriate time. (original code is commented out)

    Here is what I have:

                if d0ret:
                    dts = []
                    for i, ret in enumerate(drets):
                        dts.append(datas[i].datetime[0] if ret else None)
    
                    # Get index to minimum datetime
                    if onlyresample or noresample:
                        dt0 = min((d for d in dts if d is not None))
                    else:
                        z = []
                        for i, d in enumerate(dts):  
                            if d is not None and i not in rsonly:
                                z.append(d)
                        if len(z) > 0:
                            dt0 = min(z) 
                        else:
                            import pdb; pdb.set_trace()
    #                    dt0 = min((d for i, d in enumerate(dts)
    #                               if d is not None and i not in rsonly))
    
                    dmaster = datas[dts.index(dt0)]  # and timemaster
    

    By avoiding the enumeration of None values in dts[] I get past that point and error out as shown below:

     File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 1177, in _runnext
        dmaster = datas[dts.index(dt0)]  # and timemaster
    ValueError: 736328.0 is not in list
    
    /cerebro.py(1177)_runnext()
    -> dmaster = datas[dts.index(dt0)]  # and timemaster
    (Pdb) dt0
    736328.0
    (Pdb) 
    


  • Not sure these frequent updates are helping to clarify my situation, but here goes:

    I've switched to not resampling the live data and have instead requested the timeframes that I need for the system. This allows me to avoid the code above that is causing the crash. By doing so, I am able to get to LIVE data as shown below; however, I am back to the issue of not having self.position in next(). Stumped

    ***** DATA NOTIF: LIVE
    Self  len: 6231
    Data2 len: 4901
    Data3 len: 2108
    Data2 len == Data3 len: False
    Data2 dt: 2017-01-03 18:03:59.456999
    Data3 dt: 2017-01-02 19:00:00
    
    (Pdb) self.data_es
    <backtrader.feeds.ibdata.IBData object at 0x810b73a58>
    (Pdb) self.data_es.position
    *** AttributeError: 'Lines_LineSeries_DataSeries_OHLC_OHLCDateTime_Abst' object has no attribute 'position'
    (Pdb) self.data.position
    *** AttributeError: 'Lines_LineSeries_DataSeries_OHLC_OHLCDateTime_Abst' object has no attribute 'position'
    (Pdb) self.datas[0].position
    *** AttributeError: 'Lines_LineSeries_DataSeries_OHLC_OHLCDateTime_Abst' object has no attribute 'position'
    (Pdb) self.datas[1].position
    *** AttributeError: 'Lines_LineSeries_DataSeries_OHLC_OHLCDateTime_Abst' object has no attribute 'position'
    (Pdb) self.datas[2].position
    *** AttributeError: 'Lines_LineSeries_DataSeries_OHLC_OHLCDateTime_Abst' object has no attribute 'position'
    (Pdb) self.datas[3].position
    *** AttributeError: 'Lines_LineSeries_DataSeries_OHLC_OHLCDateTime_Abst' object has no attribute 'position'
    

  • administrators

    Last first (because the other posts will probably require some research)

    It is surprising trying to picture how you've come up with:

    self.data.position  # and the self.datas[x].position
    

    Data feeds only carry that: data. They know nothing about what the strategy is doing with them, and actual buy/sell actions are executed in the broker

    How to get it:

    Most samples work with a single data feed and the code in the strategy has something like:

    class Strategy(bt.Strategy):
        ...
    
        def next(self):
            if self.position:
                do_something()
    

    self.position will give you the current position (price and size) for self.data0 (aka self.data).

    The boolean test above will be done against the size part of the position.

    In the background self.position is a property which is aliased to self.getposition() (with the default signature data=None) and this in turn goes to the broker and retrieves the position.
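    (Editor's note) The mechanism just described (a property delegating to getposition(), which in turn asks the broker) can be sketched in plain Python; this is a toy model, not backtrader's actual classes:

    ```python
    class Broker:
        """Toy broker keeping a size-only position per data feed (a sketch)."""
        def __init__(self):
            self._positions = {}

        def getposition(self, data):
            return self._positions.setdefault(data, 0)

    class Strategy:
        def __init__(self, broker, datas):
            self.broker = broker
            self.datas = datas

        def getposition(self, data=None):
            # the default signature data=None falls back to datas[0]
            return self.broker.getposition(data if data is not None else self.datas[0])

        @property
        def position(self):
            # alias for self.getposition() on the first data feed
            return self.getposition()

    s = Strategy(Broker(), ['es', 'spy'])
    print(s.position, s.getposition('spy'))  # 0 0
    ```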

    To retrieve the position for a given data (for example data1) use:

    self.getposition(data=self.data1)  # or self.getposition(data=self.datas[1])
    


  • The strategy I am working on is attempting to use self.position and when backtesting with the static data, that all works as expected.

    It is when I am trying to add live data, and backfill_from the static data that I am seeing this behavior. My digging at those other data structures was just an attempt in the debugger to find an initialized value that is somehow not getting carried forward for self.position.

    In this strategy, I am doing the following to set the proper? data for use in the strategy, depending on whether we are live execution or not. I am making an assumption that when using live data that is backfilled using the backfill_from parameter, that I need to use these live datas sources when passing data to indicators or executing trades.

        def __init__(self):
            # To keep track of pending orders and buy price/commission
            self.order = None
            self.datastatus = False
            if self.p.live:
                self.data_es = self.data2
                self.data_spy = self.data3
            else:
                self.data_es = self.data0
                self.data_spy = self.data1
    

    I'm then using these datas to execute trades within the strategy against the appropriate data. In this case self.data_es.

    Still confused and apologies for confusing you as to what I am doing.
    Please let me know what I can do to help clear the fog I have created.


  • administrators

    No apologies are needed. You are putting the debugging pieces in place.

    Let's comment on the approach you quote:

        def __init__(self):
            # To keep track of pending orders and buy price/commission
            self.order = None
            self.datastatus = False
            if self.p.live:
                self.data_es = self.data2
                self.data_spy = self.data3
            else:
                self.data_es = self.data0
                self.data_spy = self.data1
    

    This is confusing, because the approach taken by backtrader is to have code which runs unmodified no matter if you are backtesting with static data or live data (having live data doesn't imply sending orders to a real broker; the broker simulation can still be used)

    Quoting from further above how the data feeds are created and added to the system (some parameters removed for brevity):

    # Parse static ES data file of daily OHLCV
    data0 = bt.feeds.PandasData(dataname='my-es-file')

    # Parse SPY data file of daily OHLCV
    data1 = bt.feeds.PandasData(dataname='my-spy-file')

    IBDataFactory = ibstore.getdata

    # ES Live Data
    data2 = IBDataFactory(dataname=symbol, backfill_from=data0, timeframe=bt.TimeFrame.Seconds)
    cerebro.resampledata(data2, timeframe=bt.TimeFrame.Minutes, compression=1)

    # SPY Live Data
    data3 = IBDataFactory(dataname=symbol, backfill_from=data1)
    cerebro.resampledata(data3, timeframe=bt.TimeFrame.Days, compression=1)
    

    At the end of the day, you have only 2 data feeds in the system (with indices 0 and 1)

    • self.datas[0] (aka self.data or self.data0) will refer to the ES Live Data, which has been backfilled from a pandas.DataFrame created from the file 'my-es-file'
    • self.datas[1] (aka self.data1) will refer to the SPY Live Data, which has also been backfilled from a pandas.DataFrame created from the file 'my-spy-file'

    As such, the data feeds which you will track in the strategy should look like this:

    def __init__(self):
        # To keep track of pending orders and buy price/commission
        self.order = None
        self.datastatus = False
        self.data_es = self.data0
        self.data_spy = self.data1
    

    With no need for a differentiation between being live or not. If you are only backtesting and not sourcing from IB, the code when adding the data feeds will look something like this:

    data0 = bt.feeds.PandasData(dataname='my-es-file')  # Parse static ES data file of daily OHLCV
    data1 = bt.feeds.PandasData(dataname='my-spy-file')  # Parse SPY data file of daily OHLCV
    
    cerebro.resampledata(data0, timeframe=bt.TimeFrame.Minutes, compression=1)
    cerebro.resampledata(data1, timeframe=bt.TimeFrame.Days, compression=1,)
    

    By using the right variable naming the code can be shared between the live version and the non-live version when loading the data.

    Note: it seems surprising that you are looking at the ES in minutes and at the SPY in days, but there is for sure a good use case for it.


  • administrators

    @RandyT said in Example of adding live-data to static in strategy?:

    In next() of the indicator, I am doing the following:

    self.lines.dvo[0] = _percent_rank(self.dval8.get(size=756, ago=0), self.dval8[0])
    

    So having solved this mystery, I continue to have the other error. I'll see about getting in the debugger there to see if I can determine which feed is the problem.

    Let's try not to forget the 756 problem. From the naming, it seems that the percentile rank of the current value of self.dval8 (at index [0]) is being calculated, in the context of the last 756 values produced by itself.

    The easiest approach to the problem is:

    class PercentRank(bt.indicators.PeriodN):
        lines = ('percentrank',)
        
        def next(self):
            self.lines.percentrank[0] = _percent_rank(self.data.get(size=self.p.period), self.data[0])
    

    Your dvo line then becomes, in __init__:

    def __init__(self):
        self.lines.dvo = PercentRank(self.dval8, period=756)
    

    And you can forget about having to manually use self.addminperiod(756), because all dependencies will be accounted for.

    Of course your use case may be a lot more complicated.
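    (Editor's note) For reference, `_percent_rank` itself never appears in the thread; one plausible pure-Python definition (the exact semantics are an assumption) would be:

    ```python
    def _percent_rank(window, value):
        """Fraction of values in `window` strictly below `value` (a guess at
        the semantics; the thread never shows the real implementation)."""
        window = list(window)
        if not window:
            return float('nan')
        return sum(v < value for v in window) / len(window)

    print(_percent_rank([1, 2, 3, 4], 3))  # 0.5
    ```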


  • administrators

    @RandyT said in Example of adding live-data to static in strategy?:

    print(dts) shows the following when things error:

    ...
    [736326.0, 736326.0, 736326.0, 736321.0]
    [736328.0, 736328.0, 736328.0, 736326.0]
    ***** DATA NOTIF: DELAYED
    ***** DATA NOTIF: DELAYED
    [None, None, None, 736327.0]
    

    This message might have the key to try to understand where the problem could be. The code which is triggering the error is (in a single line, because the exception trace only shows the 1st line of the multi-line statement):

                        dt0 = min((d for i, d in enumerate(dts) if d is not None and i not in rsonly))
    

    Your first three data feeds are delivering None (no new data is available for delivery) and the 4th indicates that a new data point is available. This is apparently being discarded because i not in rsonly is False, i.e.: the 4th data in the system is a resampled one and is being discarded as a sync source for the system, because non-resampled datas are available.
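    (Editor's note) That failure mode can be reproduced standalone; rsonly containing index 3 is an assumption based on the diagnosis above:

    ```python
    dts = [None, None, None, 736327.0]  # the last printout before the crash
    rsonly = {3}  # assumed: the 4th feed is the resample-only one

    try:
        dt0 = min(d for i, d in enumerate(dts) if d is not None and i not in rsonly)
    except ValueError as exc:
        print(exc)  # min() arg is an empty sequence
    ```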

    Let's try to summarize:

  • You were adding 4 datas at the beginning:

      • 2 with adddata, but this was wrong because those were meant as backfillers for the live datas, to warm up the indicators.
      • 2 with resampledata (from the IBDataFactory)

    Following the explanation in the last posts, you should have:

    • Only 2 with resampledata (which resample IBDataFactory feeds, which in turn use backfill_from applying the 2 pandas.DataFrames)

    For you this should be the solution.

    Something I missed before in one of your latest messages and which is important to analyze:

    I've switched to not resampling the live data and have instead requested the timeframes that I need for the system

    This won't work with IB. Simply setting the timeframe/compression, forces backfilling in that combination, but IB will still only send you data snapshots every 250ms (or 5-secs with RTBars) and not the requested timeframe/compression.

    See this part of the documentation:

    Bottom line: you need to resample to the desired resolution

    This can be an area of improvement, because this may seem like unnatural behavior to some, and a directly resampled data could be returned (this would break the possibility to resample to a larger timeframe; see note below)

    Final summary

    • Create 2 backfillers with bt.feeds.PandasData

    • Create 2 data feeds with IBDataFactory, putting the backfillers in backfill_from and the desired timeframe / compression resolution (to have consistent backfilling)

    • Add the latter 2 data feeds to the system with cerebro.resampledata with the same timeframe / compression resolution as chosen before

      Greater would work too, but it seems more sensible to directly request from IB the right resolution.

    Hope this helps



  • @backtrader said in Example of adding live-data to static in strategy?:

    Quoting from further above how the data feeds are created and added to the system (some parameters removed for brevity):

    # Parse static ES data file of daily OHLCV
    data0 = bt.feeds.PandasData(dataname='my-es-file') 
    
    # Parse SPY data file of daily OHLCV
    data1 = bt.feeds.PandasData(dataname='my-spy-file')
    
    IBDataFactory = ibstore.getdata
    
    # ES Live Data
    data2 = IBDataFactory(dataname=symbol, backfill_from=data0, timeframe=bt.TimeFrame.Seconds)
    cerebro.resampledata(data2, timeframe=bt.TimeFrame.Minutes, compression=1)
    
    # SPY Live Data
    data3 = IBDataFactory(dataname=symbol, backfill_from=data1)
    cerebro.resampledata(data3, timeframe=bt.TimeFrame.Days, compression=1,)
    

    Note: it seems surprising that you are looking at the ES in minutes and at the SPY in days, but there is for sure a good use case for it.

    @backtrader DRo, I sincerely thank you for the amount of time spent responding to my questions and getting me on track here.

    One point from the above I want to clarify to make sure I am not causing other problems there.

    • The static data feeds I am using are daily data both for ES and SPY.
    • I am sourcing live data for the trading process only and am using daily SPY to feed the indicators and it is my intent to execute trades against the minute frequency (or faster) ES data.
    • I want to trigger an entry or exit signal based on the daily close on SPY and execute the trade on ES in the next minute after the close of the SPY. (since the futures market is still open for the next 15min at least)

    I hope that explains in more detail what I am doing there and I am curious if I am causing part of my problem because I am mixing timeframes with the static data being daily and the live data being a different timeframe.

    Thanks again for your help.



  • @backtrader

    I am finding that without an .adddata() for the static data feeds, I error out with the following.

        results = cerebro.run(runonce=False, tradehistory=True, exactbars=args.exactbars)
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/cerebro.py", line 809, in run
        runstrat = self.runstrategies(iterstrat)
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/cerebro.py", line 933, in runstrategies
        self._runnext(runstrats)
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/cerebro.py", line 1152, in _runnext
        drets = [d.next(ticks=False) for d in datas]
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/feed.py", line 339, in next
        ret = self.load()
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/feed.py", line 411, in load
        _loadret = self._load()
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/feeds/ibdata.py", line 522, in _load
        if not self.p.backfill_from.next():
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/feed.py", line 339, in next
        ret = self.load()
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/feed.py", line 411, in load
        _loadret = self._load()
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/feeds/pandafeed.py", line 204, in _load
        self._idx += 1
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/lineseries.py", line 429, in __getattr__
        return getattr(self.lines, name)
    AttributeError: 'Lines_LineSeries_DataSeries_OHLC_OHLCDateTime_Abst' object has no attribute '_idx'
    

    If I add back the .adddata() calls for static data, (and use your other recommendations to use the .datas[0] and .datas[1]) I error out with the previously reported issue as follows:

    Traceback (most recent call last):
      File "systems/system.py", line 293, in <module>
        runstrategy()
      File "systems/system.py", line 133, in runstrategy
        results = cerebro.run(runonce=False, tradehistory=True, exactbars=args.exactbars)
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/cerebro.py", line 809, in run
        runstrat = self.runstrategies(iterstrat)
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/cerebro.py", line 933, in runstrategies
        self._runnext(runstrats)
      File "/Users/randy/.virtualenvs/backtrader-2.7/lib/python2.7/site-packages/backtrader/cerebro.py", line 1166, in _runnext
        dt0 = min((d for i, d in enumerate(dts)
    ValueError: min() arg is an empty sequence
    

  • administrators

    • The static data feeds I am using are daily data both for ES and SPY.

    This is clear. But daily data cannot be used to warm up the indicators for minute data. The reality is that the term known as resampling should really be named upsampling, because the data you pass cannot be downsampled. (Note: you can break an OHLC bar into 3 smaller samples: O + HL and C, but that won't turn daily data into minute data.)
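    (Editor's note) The O + HL + C breakdown mentioned in the note could be sketched like this; the provisional close in the middle snapshot is an arbitrary choice for illustration:

    ```python
    def split_ohlc(o, h, l, c):
        """Break one OHLC bar into 3 partial snapshots: O, then O+HL, then the
        complete bar. This carries no intrabar timing, so it cannot turn daily
        data into minute data."""
        return [
            (o, o, o, o),  # only the open is known
            (o, h, l, l),  # high/low known; close provisionally set to the low
            (o, h, l, c),  # the finished bar
        ]

    print(split_ohlc(10.0, 12.0, 9.0, 11.0)[-1])  # (10.0, 12.0, 9.0, 11.0)
    ```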

    • I am sourcing live data for the trading process only and am using daily SPY to feed the indicators and it is my intent to execute trades against the minute frequency (or faster) ES data.

    Two options here:

    1. You have 2 data feeds in the system.

      • x-minutes data feed which is made up of the live data resampled to x-minutes
      • 1-day data feed, which is made up of the backfilling data, plus the live data resampled to 1-day

      You calculate the indicators on the daily (because your backfilling data is daily) and execute orders on the x-minutes

    2. You have 1 data feed in the system

      • x-minutes data feed which is made up of the live data resampled to x-minutes and backfilled with the 1-day static data

      This won't work. Because the 1-day static data cannot be downsampled to x-minutes. The backfilling makes no sense at all.

    • I want to trigger an entry or exit signal based on the daily close on SPY and execute the trade on ES in the next minute after the close of the SPY. (since the futures market is still open for the next 15min at least)

    This is the clear part and should work. But the concept in play is, again, resampling:

    1. Create the backfiller

    2. Create a live data feed for the SPY with timeframe=bt.TimeFrame.Days adding the backfill_from=backfiller

      Add the closing session time with sessionend=datetime.time(16, 00) (Assuming the SPY trades in the EST timezone)

    3. Add it to the system with cerebro.resampledata and timeframe=bt.TimeFrame.Days

    With this you have the SPY in a 1-day timeframe and you can check when the day is delivered because the daily bar will only be delivered once the session closing time has been met (the len(self.dataX) increases)
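    (Editor's note) The "len increases" check can be captured in a small helper; this is a sketch of the pattern described above, not a backtrader API:

    ```python
    class NewBarDetector:
        """A new daily bar has been delivered when the data length has grown
        between two calls to next()."""
        def __init__(self):
            self._last_len = 0

        def check(self, data_len):
            if data_len > self._last_len:
                self._last_len = data_len
                return True
            return False

    det = NewBarDetector()
    print(det.check(1), det.check(1), det.check(2))  # True False True
    ```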

    Now: you shouldn't need backfilling for the ES because it is only the target of the order and not the data on which you want to run indicators (assumption). The following would be the best approach:

    1. Create a live data feed for the ES with timeframe=bt.TimeFrame.Minutes and no backfill_from

    2. Add it to the system with cerebro.resampledata and timeframe=bt.TimeFrame.Minutes

      Note: you may even consider disabling backfill_start for the ES because the signals are given by the SPY
      Note 2: as pointed out above, it makes no sense to backfill_from with daily data, because the stream is using a minute resolution.

    Once you have seen the new SPY daily bar, the next call to the next method of your strategy should theoretically happen at most 1 minute later (the ES is liquid, so there will for sure be values for a 1-min resampled bar), and you can issue the order.

    Not something that has been tried, but the operation mode is supported.

    Time Management Note: For the correctness of the sessionend parameter above, one should know in which timezone the exchange for the SPY actually operates (for example the ES-Mini is in the CME exchange and operates in the CST timezone)



  • @backtrader DRo, hope I gave you a bit of a break on this while I was on holiday. :smile:

    Thanks for the detailed info on how this should work. Unfortunately, I am struggling to get a successful result.

    I've taken the approach to try to create the most simple example I can to see if we can sort out these issues. There seem to be multiple, or one is related to another. In particular, the problem seems to be related to the Pandas dataframe that I am creating, or some problem with my use of that data.

    I'm including below the code that I have used to recreate this. Using the YahooCSV data, it seems to work. If you comment out the YahooCSV data and use Pandas dataframe instead, it errors. Error is below. This first error is solved if I do an .adddata() on the backfill_from source. But as you have said, I should not need to do that for the backfill.

    Error thrown with Pandas formatted data:

    Traceback (most recent call last):
      File "systems/tests/test.py", line 159, in <module>
        cerebro.run()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 809, in run
        runstrat = self.runstrategies(iterstrat)
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 933, in runstrategies
        self._runnext(runstrats)
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 1152, in _runnext
        drets = [d.next(ticks=False) for d in datas]
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/cerebro.py", line 1152, in <listcomp>
        drets = [d.next(ticks=False) for d in datas]
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/feed.py", line 339, in next
        ret = self.load()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/feed.py", line 411, in load
        _loadret = self._load()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/feeds/ibdata.py", line 522, in _load
        if not self.p.backfill_from.next():
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/feed.py", line 339, in next
        ret = self.load()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/feed.py", line 411, in load
        _loadret = self._load()
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/feeds/pandafeed.py", line 204, in _load
        self._idx += 1
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/lineseries.py", line 429, in __getattr__
        return getattr(self.lines, name)
    AttributeError: 'Lines_LineSeries_DataSeries_OHLC_OHLCDateTime_Abst' object has no attribute '_idx'
    

    Test code I am using:

    from __future__ import (absolute_import, division, print_function,
                            unicode_literals)
    
    import datetime as dt
    import os.path
    import sys
    import pandas as pd
    
    # Import the backtrader platform
    import backtrader as bt
    import backtrader.feeds as btfeeds
    
    
    # Create a Strategy
    class TestStrategy(bt.Strategy):
        params = (
            ('maperiod', 15),
        )
    
        def log(self, txt, dt=None):
        ''' Logging function for this strategy'''
            dt = dt or self.datas[0].datetime.date(0)
            print('%s, %s' % (dt.isoformat(), txt))
    
        def __init__(self):
            self.dataclose = self.datas[0].close
    
            # To keep track of pending orders and buy price/commission
            self.order = None
            self.buyprice = None
            self.buycomm = None
    
            # Add a MovingAverageSimple indicator
            self.sma = bt.indicators.MovingAverageSimple(self.datas[0], period=self.params.maperiod)
    
        def notify_order(self, order):
            if order.status in [order.Submitted, order.Accepted]:
                # Buy/Sell order submitted/accepted to/by broker - Nothing to do
                return
    
            # Check if an order has been completed
        # Attention: broker could reject order if not enough cash
            if order.status in [order.Completed, order.Canceled, order.Margin]:
                if order.isbuy():
                    self.log(
                        'BUY EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' %
                        (order.executed.price,
                         order.executed.value,
                         order.executed.comm))
    
                    self.buyprice = order.executed.price
                    self.buycomm = order.executed.comm
                else:  # Sell
                    self.log('SELL EXECUTED, Price: %.2f, Cost: %.2f, Comm %.2f' %
                             (order.executed.price,
                              order.executed.value,
                              order.executed.comm))
    
                self.bar_executed = len(self)
    
            # Write down: no pending order
            self.order = None
    
        def notify_trade(self, trade):
            if not trade.isclosed:
                return
    
            self.log('OPERATION PROFIT, GROSS %.2f, NET %.2f' %
                     (trade.pnl, trade.pnlcomm))
    
        def next(self):
            # Simply log the closing price of the series from the reference
            self.log('Close, %.2f' % self.dataclose[0])
    
            # Check if an order is pending ... if yes, we cannot send a 2nd one
            if self.order:
                return
    
            # Check if we are in the market
            if not self.position:
    
                # Not yet ... we MIGHT BUY if ...
                if self.dataclose[0] > self.sma[0]:
                    # current close above the moving average
    
                    # BUY, BUY, BUY!!! (with default parameters)
                    self.log('BUY CREATE, %.2f' % self.dataclose[0])
    
                    # Keep track of the created order to avoid a 2nd order
                    self.order = self.buy()
    
            else:
    
                # Already in the market ... we might sell
                if self.dataclose[0] < self.sma[0]:
                    # SELL, SELL, SELL!!! (with all possible default parameters)
                    self.log('SELL CREATE, %.2f' % self.dataclose[0])
    
                    # Keep track of the created order to avoid a 2nd order
                    self.order = self.sell()
    
    
    if __name__ == '__main__':
        # Create a cerebro entity
        cerebro = bt.Cerebro()
    
        # Add a strategy
        cerebro.addstrategy(TestStrategy)
    
        # Datas are in a subfolder of the samples. Need to find where the script is
        # because it could have been called from anywhere
        modpath = os.path.dirname(os.path.abspath(sys.argv[0]))
    
        # Parse CSI SPY data file
        staticdata0path = os.path.join(modpath, '../../datas/SPY-yahoo.csv')
        # staticdata0 = btfeeds.YahooFinanceCSVData(dataname=staticdata0path)
    
        staticdata0frame = pd.read_csv(staticdata0path,
                                       header=0,
                                       skiprows=0,
                                       parse_dates=True,
                                       index_col=0)
        staticdata0 = bt.feeds.PandasData(dataname=staticdata0frame,
                                          volume='volume',
                                          openinterest=None)
        #cerebro.adddata(staticdata0)
    
        storekwargs = dict(
            host='127.0.4.1',
            port=7465,
            timeoffset=True,
            reconnect=3,
            timeout=3,
            _debug=True
        )
    
        ibstore = bt.stores.IBStore(**storekwargs)
    
        broker = ibstore.getbroker()
    
        cerebro.setbroker(broker)
    
        # SPY Live Data Timeframe 1 Day
        data0 = ibstore.getdata(dataname='SPY-STK-SMART-USD', backfill_from=staticdata0,
                                timeframe=bt.TimeFrame.Days, compression=1,
                                sessionend=dt.time(16, 00))
        cerebro.resampledata(data0, timeframe=bt.TimeFrame.Days, compression=1)
    
        # Add a FixedSize sizer according to the stake
        cerebro.addsizer(bt.sizers.FixedSize, stake=1)
    
        # Set the commission - 0.1% ... divide by 100 to remove the %
        cerebro.broker.setcommission(commission=0.001)
    
        # Print out the starting conditions
        print('Starting Portfolio Value: %.2f' % cerebro.broker.getvalue())
    
        # Run over everything
        cerebro.run()
    
        # Print out the final result
        print('Final Portfolio Value: %.2f' % cerebro.broker.getvalue())
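
    The intended handover in the script above can be pictured as: `backfill_from` first replays all bars from the static feed, then switches over to the incoming live bars. A rough stand-alone sketch of that flow (pure Python, not the actual backtrader implementation, and ignoring the deduplication of overlapping bars):

    ```python
    def backfilled(static_bars, live_bars):
        # Deliver every historical bar first, then hand over to the live stream.
        yield from static_bars
        yield from live_bars

    static = [100.0, 101.5, 102.0]   # e.g. closes parsed from the SPY CSV
    live = iter([102.4, 103.1])      # e.g. bars arriving from IB
    print(list(backfilled(static, live)))  # [100.0, 101.5, 102.0, 102.4, 103.1]
    ```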
    

  • administrators

    It seems you are stretching the code into corners which haven't been visited that often before. Let's have a look later.



  • I've gotten a bit further here: it seems we just need to initialize `_idx` in `__init__()` at line 154 of feeds/pandafeed.py.
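
    For illustration, the workaround amounts to moving the cursor initialization from `start()` into `__init__()`, so the attribute exists even on a code path that never runs `start()` (as the `backfill_from` path here apparently does not). A stand-alone toy sketch of the pattern, with illustrative names rather than the real `PandasData` internals:

    ```python
    class Feed:
        """Toy feed illustrating the fix: set the read cursor in __init__."""

        def __init__(self):
            # Initializing _idx here means a consumer that bypasses
            # start() still finds the attribute in place.
            self._idx = -1

        def start(self):
            self._idx = -1  # a normal run may still reset it

        def next_row(self, rows):
            self._idx += 1
            return rows[self._idx] if self._idx < len(rows) else None

    feed = Feed()                     # note: start() is never called
    print(feed.next_row(['a', 'b']))  # 'a' -- no AttributeError
    ```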

    After clearing that up, I ran into a timezone issue. It's not clear to me where or when a timezone should be applied to backfilled data.

    backtrader/feed.py", line 428, in load
        if self._tzinput:
      File "/home/inmate/.virtualenvs/backtrader3/lib/python3.4/site-packages/backtrader/lineseries.py", line 429, in __getattr__
        return getattr(self.lines, name)
    AttributeError: 'Lines_LineSeries_DataSeries_OHLC_OHLCDateTime_Abst' object has no attribute '_tzinput'
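
    For context on why the error blames a `Lines_...` object: as the traceback shows, `LineSeries.__getattr__` forwards any attribute that normal lookup cannot find to `self.lines`, so a feed attribute like `_tzinput` that was never set surfaces as an `AttributeError` on the lines object instead of the feed itself. A minimal reproduction of that delegation, with simplified names:

    ```python
    class Lines:
        pass

    class LineSeries:
        def __init__(self):
            self.lines = Lines()

        def __getattr__(self, name):
            # Called only when normal attribute lookup fails;
            # the lookup is forwarded to self.lines.
            return getattr(self.lines, name)

    feed = LineSeries()
    try:
        feed._tzinput  # never initialized on either object
    except AttributeError as e:
        print(e)  # "'Lines' object has no attribute '_tzinput'"
    ```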