@polr said in HTTP Rest Datafeed:
My question is this -- if I want to do a test across WTI for example from 1995 to 2010 I have 180 data feeds to enter (12 * 15). Then if I want to test the portfolio across 25 products I get into the thousands of feeds quite quickly. Am I correct in my understanding?
12 * 15 seems about right. And if you have 25 different assets and you want to backtest all of them at the same time, yes, you will end up with 12 * 15 * 25 = 4500 data feeds.
@polr said in HTTP Rest Datafeed:
I have a function that takes a datetime object as a parameter and returns a list of the front, first roll, second roll etc for the given date, among other information. I was wondering if it was possible to provide a function as input to a data feed such that the data feed provided to the Strategy object is in essence a symbol and a rolling function (in other words, it doesn't yet hold data but knows what to do when the strategy iteratively provides it a date during the implementation of the strategy.
Such a data feed would be possible, but it doesn't exist at the moment. You simply have to override _load and, each time it is called, provide the next bar of data. From inside _load you can decide where the next batch of data is coming from.
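To make the _load pattern concrete, here is a rough sketch of the idea. Note the class below is a hypothetical stand-in, not the real backtrader machinery: an actual feed would subclass bt.feed.DataBase and fill its lines, but the control flow of _load (return True when a bar was produced, False when the sources are exhausted, and switch to the next source in between) is the same.

```python
import csv
import datetime


class RollingFeed:
    """Hypothetical stand-in for a custom data feed. A real backtrader
    feed would subclass bt.feed.DataBase and fill self.lines; here we
    just keep the last loaded bar in self.bar to show the pattern."""

    def __init__(self, filenames):
        self.filenames = list(filenames)  # e.g. one CSV file per month
        self._reader = None
        self.bar = None  # last loaded (datetime, close) tuple

    def _load(self):
        # Called once per bar. Return True if a bar was loaded,
        # False when all sources are exhausted (feed is over).
        while True:
            if self._reader is None:
                if not self.filenames:
                    return False  # no more files to roll into
                # decide here where the next batch of data comes from
                self._reader = csv.reader(open(self.filenames.pop(0)))
            try:
                row = next(self._reader)
            except StopIteration:
                self._reader = None  # current file done, try the next
                continue
            self.bar = (datetime.datetime.strptime(row[0], '%Y-%m-%d'),
                        float(row[1]))
            return True
```

The same while-loop skeleton works if the "next batch" comes from an HTTP request or from your rolling function instead of a file.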
In any case you end up with the same problem as above: you need to be able to fetch 12 * 15 data feeds. Whereas above you do it in a loop/list comprehension (quite easy if the naming is consistent), you push the complexity into a different beast if you make a custom data feed.
Example with a list comprehension
basename = 'myfeed-{}.txt'
dfeeds = [bt.feeds.GenericCSVData(dataname=basename.format(i)) for i in range(1995, 2011)]
cerebro.rolloverdata(*dfeeds)
That should do the trick (you may obviously do extra things, like setting the format of the datetime field, and it may be better to unroll the comprehension into a regular loop with a couple of extra lines)
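If the files are split per month rather than per year, the same comprehension idea covers the 12 * 15 case from the question. A sketch, assuming a hypothetical naming scheme myfeed-YYYY-MM.txt (the real names depend on how the files were downloaded):

```python
# Build the 12 * 15 = 180 monthly filenames for 15 years starting in 1995.
# The 'myfeed-YYYY-MM.txt' scheme is just an assumption for illustration.
basename = 'myfeed-{:04d}-{:02d}.txt'
names = [basename.format(y, m)
         for y in range(1995, 2010)   # 15 years
         for m in range(1, 13)]       # 12 months each
```

Each of those names can then be fed to a GenericCSVData instance exactly as in the yearly snippet above, and the whole list handed over for rolling.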