@ab_trader said in Basic MultiTimeframe order:
@Paul-Park said in Basic MultiTimeframe order:
So nextstart() ensures that the higher time frame completes the bar before letting the program continue?
nextstart() has no relation to time frames.
Ahh I see where my confusion was now. Thank you very much.
I managed to get past the error message about the symbol:
I just declared the symbol like this:
w, h = 2, len(sheet.index)
symbols = [[0 for x in range(w)] for y in range(h)]
The code is running now, but I think it stalls somewhere: if I plot the equity, it plots it once, then nothing happens and the code doesn't stop running.
What do you think about the three functions? Do you see anything that is wrong?
@soulmachine Thanks for asking this question. I ran into issues with this a while ago and was too lazy to dig up the answer.
@vladisld is correct in how you put in the timeframes. One small caveat to his comment: the compression only works on intraday timeframes, so typically, say, 1-minute compression to 30 or 60 minutes.
cerebro.addanalyzer(bt.analyzers.SharpeRatio, timeframe=bt.TimeFrame.Minutes, compression=30, _name="mysharpe_1")
cerebro.addanalyzer(bt.analyzers.SharpeRatio, timeframe=bt.TimeFrame.Minutes, compression=60, _name="mysharpe_2")
mysharpe_1: -2.3099, mysharpe_2: -1.7835
If you try the same thing with say Weekly or Monthly timeframes, there's no impact on results.
cerebro.addanalyzer(bt.analyzers.SharpeRatio, timeframe=bt.TimeFrame.Weeks, compression=2, _name="mysharpe_1")
cerebro.addanalyzer(bt.analyzers.SharpeRatio, timeframe=bt.TimeFrame.Weeks, compression=4, _name="mysharpe_2")
mysharpe_1: -0.3472, mysharpe_2: -0.3472
For more information you can look in the docs or directly at the code.
@Quanliang-Xie said in Passing data into data feed one by one:
I want to test a single strategy using a series of csv files from a folder
There are several scenarios:
Running the same strategy on each csv file separately (separate broker, account balance) and sequentially (one after another):
In this case, just put the cerebro-creating code inside your for loop. In the code above, the same cerebro instance is used to host all the datas and strategies; since the datas are hosted directly in the cerebro instance and not in the strategy, all the added datas just accumulate in the same cerebro instance.
Same as above, only now run all the strategies simultaneously
Basically very similar: just move the code under the for loop into a separate function that receives the csv path as a parameter, and use Python's 'multiprocessing' capabilities to run it.
Alternatively, you may try to use the Backtrader optstrategy as @run-out has suggested, however, this is more involved and wasn't designed for your particular use case.
Running all the csv files against the same broker account (sharing the account cash)
In this case you just need to add all the datas to the same cerebro instance, as you are doing currently. However, only a single strategy needs to be added to the cerebro instance, and the cerebro instance should be run only once as well - not inside a for loop.
I am keen to see responses on this too.
For stop losses, you can place these with your broker at the time of placing the order. The SL will then be hit and executed by the broker regardless of Backtrader timeframes.
I typically work on 1-minute bars in the current market; however, fast price movements sometimes mean I am not getting the best timing to close a position when the exit condition is hit.
The only solution I've been able to think of (but not yet tried) is to set up 'data' and 'data1' on two different timeframes: 'data' would be, say, 5-second bars (for closing positions) and 'data1' would carry the entry conditions on a higher timeframe (e.g. 1 min).
To those who may come across this thread: I cannot explain why the issue is occurring, but if you pass a name to the default data when calling 'adddata', the legend will look normal again. Fixed
#Don't use this:
#Use a name with the data:
The name comes across in the heading / legend of the master Bokeh plot as so:
When using multiple data sources, you will get notified of every change: when resampling, the data advances; when replaying, the data changes (in that case the data will not necessarily advance). So to know whether the data source advanced, you need to check the length of the data.
if len(self.data) > self._last_len:
    self._last_len = len(self.data)  # the data source advanced: a new bar arrived