First: "lines objects from operations DO NOT GET plotted (like close_over_sma = self.data.close > self.sma)"
Second: "There is an auxiliary LinePlotterIndicator which plots such operations if wished with the following approach:"
And the example:
close_over_sma = self.data.close > self.sma
I'm live trading a multi-data, multi-strategy setup using IBBroker. A simple way of implementing it is to add multiple data feeds and multiple strategies to the same cerebro instance and have a way to associate each strategy with the appropriate data feed. One way of doing this is to pass the appropriate data index to the strategy as a parameter.
There were some IB-specific problems, though, that needed to be addressed (see my previous post on this: https://community.backtrader.com/topic/2122/live-trading-multiple-strategies-each-one-on-a-separate-data-feed) - I don't know whether the Oanda broker has similar problems - but otherwise it is working pretty well.
Does anybody have any custom indicator examples that include numpy calculations? I would appreciate it if you could share one so I can figure out how to convert the code below.
import numpy as np

def NPALMA(pnp_array, a):
    # Arnaud Legoux Moving Average over a numpy array
    length = a
    # just some number (6.0 is useful)
    sigma = 6.0
    # sensitivity (close to 1) or smoothness (close to 0)
    offset = 0.85
    asize = length - 1
    m = offset * asize
    s = length / sigma
    dss = 2 * s * s
    alma = np.zeros(pnp_array.shape)
    wtd_sum = np.zeros(pnp_array.shape)
    for l in range(len(pnp_array)):
        if l >= asize:
            for i in range(length):
                im = i - m
                wtd = np.exp(-(im * im) / dss)
                # window ends at the current bar: indices l - asize .. l
                # (the original "l - length + i" read index -1 on the first full bar)
                alma[l] += pnp_array[l - asize + i] * wtd
                wtd_sum[l] += wtd
            alma[l] = alma[l] / wtd_sum[l]
    return alma
Would you know how to apply it such that the full commission is taken only at the entry or the exit of a position, and not at both? I have trouble with floating balances when trading pairs that don't contain the account currency.
It seems the gradual memory consumption increase in each worker process could indeed be explained by the serialized Cerebro object passed to the worker process not being released in a timely manner.
Using Pool(maxtasksperchild=1) together with Pool.imap(..., chunksize=1) illustrates this (maxtasksperchild causes the Pool to retire each worker process after that many tasks, and chunksize=1 makes each task a single work item).
This reduces memory consumption even further and, in my case, does not cause any significant performance degradation (since each work item takes much longer than spawning a new worker process), as is clearly visible in the above graph.
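The worker-recycling setup can be sketched with a trivial stand-in work item (`run_backtest` here is a placeholder, not the real Cerebro runner):

```python
from multiprocessing import Pool

def run_backtest(params):
    # stand-in for running one serialized Cerebro work item
    return params * params

if __name__ == '__main__':
    # maxtasksperchild=1: each worker process exits after completing one
    # task, so any memory it accumulated is returned to the OS.
    # chunksize=1: each task dispatched to a worker is a single work item.
    with Pool(processes=2, maxtasksperchild=1) as pool:
        results = list(pool.imap(run_backtest, range(8), chunksize=1))
    print(results)
```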
@Brandon-La-Porte It was indeed something easy: a wrong indentation level.

for i, data in enumerate(datas):
    if i != 0:
        data.plotinfo.plotmaster = datas[0]

Previously the assignment was at the wrong indentation level and ended up being run more than once.