- Remove unused files
- README update, Docstring corrections, documentation corrections
- Update travis settings
Available via pip
Because the analyzer has not been able to calculate any values. If you only have 1 year of data, and taking into account that the default timeframe for the calculation is years, no calculation can take place: Sharpe needs at least 2 samples for the given timeframe to calculate the variance. Rather than using AnnualReturn, consider using TimeReturn and specifying the actual target timeframe.
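A minimal sketch of that suggestion, assuming a standard Cerebro setup (the _name label is arbitrary):

import backtrader as bt

cerebro = bt.Cerebro()
# TimeReturn with a monthly timeframe yields up to 12 samples from a single
# year of data, instead of the lone sample an annual timeframe produces
cerebro.addanalyzer(bt.analyzers.TimeReturn, _name='timereturn',
                    timeframe=bt.TimeFrame.Months)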
Let's try to summarize:
Non-Live Data feeds have well-defined interfaces and this is documented
You only have to override _load(self), which will be in charge of loading the values into the lines of the data series (in most cases these will be: datetime, open, high, low, close, volume and, for futures, openinterest) and returning True if it has been able to fill the next set of values or False to indicate the end of the data stream.
What's left for the implementer: the decision as to how _load(self) receives the values from the feed. Examples:
From a file: for each call to _load you can simply read the next line, process the data, fill the lines and return, until EOF is met.
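A minimal sketch of that pattern, assuming a text source with one comma-separated "YYYY-MM-DD,open,high,low,close,volume" record per line (the class name MyFeed and the record format are made up for the example):

import datetime

import backtrader as bt


class MyFeed(bt.feed.DataBase):
    def start(self):
        super(MyFeed, self).start()
        self.f = open(self.p.dataname, 'r')  # dataname: standard feed param

    def stop(self):
        self.f.close()

    def _load(self):
        line = self.f.readline()
        if not line:
            return False  # EOF -> end of the data stream

        tokens = line.rstrip().split(',')
        dt = datetime.datetime.strptime(tokens[0], '%Y-%m-%d')
        self.lines.datetime[0] = bt.date2num(dt)  # fill the lines ...
        self.lines.open[0] = float(tokens[1])
        self.lines.high[0] = float(tokens[2])
        self.lines.low[0] = float(tokens[3])
        self.lines.close[0] = float(tokens[4])
        self.lines.volume[0] = float(tokens[5])
        self.lines.openinterest[0] = 0.0
        return True  # ... and report that the next set of values is there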
There is even documentation on how to do it for a binary file: Docs - Binary Datafeed Development
This has, for example, been generalized for CSV-based sources by adding a _loadline method, which receives the line already broken down into tokens. In this case only overriding _loadline is needed. See Docs - CSV Data Feed Development
Live Data Feeds could have a couple of things added: tzoffset, the chosen pattern for data reception, and the extra return value for _load, which is None, indicating that the data feed currently has nothing to deliver but could have it later (the data stream is active but has not come to an end).
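As a rough sketch of that contract, assuming a provider-specific background thread puts messages into a reception queue (the queue handling shown here is made up; only the return values follow the actual interface):

import queue

import backtrader as bt


class MyLiveFeed(bt.feed.DataBase):
    def islive(self):
        return True  # keeps cerebro from preloading/running in batch mode

    def start(self):
        super(MyLiveFeed, self).start()
        self.q = queue.Queue()  # a provider thread would put messages here

    def _load(self):
        try:
            msg = self.q.get(timeout=self.p.qcheck)  # qcheck: standard param
        except queue.Empty:
            return None  # nothing to deliver now, but the stream is alive
        if msg is None:  # hypothetical sentinel for a disconnected stream
            return False  # end of the data stream
        # fill self.lines (datetime, open, high, low, close, volume) from msg
        return True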
The problem with the rest ... it is provider-dependent. And the hackathon (aka BacktraderCon 2017) last weekend proved it. The initial implementation by the guys followed the guidelines from the Oanda implementation, but because the provider (Kraken) only offers polling and has low limits in its rate-limiting policy, everything is a historical download at the end of the day. Suddenly, instead of 2 queues, both queues are the same but the usage is different.
Brokers have a well-defined interface
Here it is really a matter of work in the store, which offers a private interface to the broker. Through this interface the store will, for example, convert an order from backtrader into an order which is palatable to the real broker. On the way back, broker notifications have to be adapted to change the status of orders appropriately.
A kind of paper giving guidelines can be considered (and will, so to say, be written), but at the end of the day, and the guys from BacktraderCon could tell a lot about it, it's about the very small details of each broker.
A summary of what's on the table:
Time Scheduled Functions/Callbacks [Done]: functions will be called at specific times (periodic calling like ... "every 60 minutes starting at 08:30" may be considered). Timezone considerations:
These have to be tz-bounded like the data feeds. pytz doesn't have a notion of a local timezone. This poses a small challenge when it comes to accepting times which refer to the actual local time (it may not seem so in real time, but it is in backtesting, in which the datetime reference is set by the data feeds being backtested).
The best possible place to add the scheduling seems to be in the strategy's __init__ method and as such it should be a method of the strategy. This doesn't preclude having a global scheduling which could be entered in cerebro to control any strategy.
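A minimal sketch of the scheduling as delivered, reusing the "08:30" example from above (add_timer in __init__ and the notify_timer callback):

import datetime

import backtrader as bt


class ScheduledStrategy(bt.Strategy):
    def __init__(self):
        # schedule a callback at 08:30 in the time context of the data feeds
        self.add_timer(when=datetime.time(8, 30))

    def notify_timer(self, timer, when, *args, **kwargs):
        print('timer called at', when)

The global counterpart mentioned above is cerebro.add_timer, which takes the same parameters and can notify the strategies being run.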
Collection of numpy/pandas/statsmodels-dependent indicators [Done] The discussions here have given birth to some. So far none has been added to the package for a good reason: backtrader was meant to be a pure Python package, in the sense that it wouldn't need any external dependencies beyond a regular Python distribution (with the exception of matplotlib if plotting is wished). But at some point in time, and even if not based on numpy arrays, those indicators should make it into the main distribution.
Reverting Resampling/Replaying to the original model [Discarded]
Once the above items are done ... v2 can be kickstarted to try to
The x-axis is there (luckily, because if not, everything would be displayed on a singularity rim and we would have to fear a potential earth implosion into such an event) but the x-ticks (i.e.: timestamps in this case) are not displayed in the latest version of matplotlib. The previous stable version 3.0.3 and the last LTS version 2.2.4 do display the x-ticks.
pip install --force-reinstall matplotlib==YOUR-PREFERRED-WORKING-VERSION
The event is over ...
and we even ...
A repository with the snippets:
There you will also find a PDF with the notes that were taken during the event (including the pictures)
And of course ... the best picture ever
Some of the contributions which were in the queue would for sure be nice additions for some users of backtrader. In order to allow for a seamless integration of those for as many people as possible, the following approach will be taken:
A side project named backtrader_contrib will be created
This project will accept pull requests
Elements submitted (indicators, analyzers, ...) will be injected into the main backtrader package
The licensing is left open to the user with one exception: no GPLv2. The reason being that some licenses, like the Apache License 2.0, are not compatible with GPLv2, but are with GPLv3
The repository will contain the license text for GPLv3, ASF 2.0, MIT, BSD
The repository will be published as a pip package on a regular basis
backtrader takes a dual approach to the problem. This is controlled with the runonce (boolean) parameter, passed either to the instantiation of Cerebro or to cerebro.run, like in:
The default is True:
cerebro = Cerebro(runonce=True) # or False
cerebro = Cerebro()
...
cerebro.run(runonce=True)  # or False
This could be called a pseudo-vectorized or half-vectorized approach. Built-in operations feature a once method which calculates things in batch mode in a tight inner loop.
Data feeds are fully pre-loaded
Indicators (and sub-indicators thereof) are pre-calculated in batch-mode
Strategy instance(s) are run step-by-step
The goal being to offer an increase in speed, while still allowing for fine-grained logic in the next method of the strategy
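A minimal sketch of what such a once method looks like, using a made-up one-line indicator with both calculation modes:

import backtrader as bt


class CloseDelta(bt.Indicator):
    lines = ('delta',)

    def __init__(self):
        self.addminperiod(2)  # one previous close is needed

    def next(self):  # step-by-step calculation
        self.lines.delta[0] = self.data.close[0] - self.data.close[-1]

    def once(self, start, end):  # batch mode: tight loop over the buffers
        csrc = self.data.close.array  # fully pre-loaded source buffer
        cdst = self.lines.delta.array  # preallocated destination buffer
        for i in range(start, end):
            cdst[i] = csrc[i] - csrc[i - 1]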
Rough calculations indicate that it is somewhere between
Drawback: Because indicators are pre-calculated (and therefore the buffers are preallocated), the data synchronization mechanism cannot pause the actual movement of a data feed when synchronizing the timestamps for the strategy, to keep the buffers at the same final length. This has no actual impact on backtesting, but because matplotlib expects all things to have the same x length for plotting, it may not be possible to create a plot of the backtesting.
Drawback 2: The implementation of this mode prevented some indicators from being fully defined in recursive terms with a single formula. A choice had to be made between having this mode or having the recursive formulas.
Nice Thing: If a user implements a custom Indicator and only provides a next method (intended for step-by-step, see below), the code automatically detects it and will still pre-calculate the indicator using the next method instead of the missing once method. The calculation loop will not be as tight as it could be, but users don't have to worry about implementing once.
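For example, this made-up indicator only defines next and has no once method, yet it can still be used with runonce=True:

import backtrader as bt


class MidPoint(bt.Indicator):
    lines = ('mid',)

    def next(self):
        # backtrader detects the missing once method and pre-calculates the
        # indicator by iterating next over the pre-loaded data
        self.lines.mid[0] = (self.data.high[0] + self.data.low[0]) / 2.0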
This is a 100% step-by-step mode, also named next, because only the next method of the different indicators, strategies et al. plays a role.
Everything is calculated one step at a time. The reason being the addition of data feeds which would be providing the data points one step at a time (not necessarily live feeds; it could be reading out of a socket or from a database connection).
If cerebro is run with preload=False (disabling the preloading of data feeds), it will switch to this mode.
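A minimal sketch (adding data feeds and strategies is elided):

import backtrader as bt

cerebro = bt.Cerebro()
# ... cerebro.adddata(...) / cerebro.addstrategy(...) ...
cerebro.run(preload=False)  # no preloading -> pure step-by-step next mode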
A couple of weeks after we kick-started the idea of Algotopian, and after having evaluated the interest from users in the form of registrations, investment interest and team participation, we have made a final decision with regards to the project.
Although the number of registered people exceeds our expectations, this number doesn't match the bare minimum needed with regards to investment.
It wasn't our intention to burn the names of the agreed team members, advisors and an additional founder, which is why, as we are doing now, we wanted to be able to quietly cancel the project without having engaged in any kind of funds collection.
We'll probably recycle the name Algotopian in connection with the further development of backtrader and potential services around it. But at the moment, the project is officially cancelled.
Thank you for your interest and best regards.
Using: [bt.utils.date.num2date(date) for date in self.data.datetime.get(size=150)]
In place of: self.data.datetime.get(size=150)
This is partially correct. If you are working with timezones, that will only give you the UTC time. The correct form would be:
current_datetime = self.data.num2date()
those_150_datetimes = [self.data.num2date(x) for x in self.data.datetime.get(size=150)]
The optimization is based on the standard multiprocessing module.
Moving that to the GPU would require, for example, adding PyCuda. A quick glance at the documentation shows that code has to be written specifically for it; it is not simply meant to replace multiprocessing.
An alternative would be to use numba for CUDA, which would require lots of decoration with unknown results. Furthermore, the numba approach is probably numpy-array centered and as such unlikely to produce a huge benefit given the non-use of numpy in backtrader.
A migration to an architecture with underlying numpy arrays would be required. A possibility would be dask, which follows the pandas paradigms whilst at the same time allowing distribution and GPU usage (by means of ...)
Short answer: no.
Covering all non-candlestick indicators offered by ta-lib, implemented in Python and able to work as a drop-in replacement for ta-lib if compatibility is activated.