- Remove unused files
- README update, Docstring corrections, documentation corrections
- Update travis settings
Available via pip
A summary of what's on the table:
Time Scheduled Functions/Callbacks [Done]

Functions will be called at specific times (periodic calling like ... "every 60 minutes starting at 08:30" may be considered)

Timezone considerations: the data feeds are timezone-bound, but `pytz` doesn't have a notion of a local timezone. This poses a small challenge when it comes to accepting times which refer to the actual local time (it may not seem so in real time, but it is in backtesting, in which the datetime reference is marked by the data feeds being backtested)
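As a minimal stdlib-only illustration of the point (not backtrader code): `pytz` works with named zones, whereas deriving "the local zone" of the running machine goes through the standard `datetime` machinery:

```python
from datetime import datetime, timezone

# pytz knows named zones ("Europe/Berlin", ...) but has no "local" zone.
# The stdlib can derive the running machine's local offset instead:
now_utc = datetime.now(timezone.utc)  # unambiguous reference instant
now_local = now_utc.astimezone()      # same instant, rendered in the OS local zone

# Both objects compare equal: only the wall-clock representation differs
assert now_utc == now_local
print(now_local.tzinfo)
```

In a backtest no such OS clock exists, which is why the local-time interpretation has to be anchored to the timezone of the data feed instead.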
The best possible place to add the scheduling seems to be in the strategy's `__init__` method, and as such it should be a method of the strategy. This doesn't preclude having a global scheduling which could be entered in cerebro to control any strategy.
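The mechanics can be sketched in pure Python (this is only a conceptual sketch, not the backtrader API): in a backtest the "clock" is the datetime carried by each bar of the data feed, so a scheduler checks incoming bar timestamps instead of real time.

```python
from datetime import datetime, time, timedelta

class BarScheduler:
    """Fire callbacks at given wall-clock times, driven by bar timestamps.

    Illustrative sketch only -- not the backtrader API.
    """
    def __init__(self):
        self._jobs = []  # each job: [time-of-day, callback, last-fired-date]

    def at(self, when, callback):
        self._jobs.append([when, callback, None])

    def on_bar(self, dt):
        # called once per incoming bar with the bar's datetime
        for job in self._jobs:
            when, callback, last = job
            if dt.time() >= when and last != dt.date():
                callback(dt)
                job[2] = dt.date()  # fire at most once per day

fired = []
sched = BarScheduler()
sched.at(time(8, 30), lambda dt: fired.append(dt))

# Simulate a feed of 15-minute bars starting at 08:00
dt = datetime(2017, 5, 2, 8, 0)
for _ in range(8):
    sched.on_bar(dt)
    dt += timedelta(minutes=15)

print(fired)  # fires exactly once, on the 08:30 bar
```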
Collection of numpy/pandas/statsmodels dependent indicators [Done]

The discussions here have given birth to some. So far none has been added to the package for a good reason: backtrader was meant to be a pure Python package, in the sense that it wouldn't need any external dependencies beyond a regular Python distribution (with the exception of `matplotlib` if plotting was wished for).

But at some point in time, even if not based on numpy arrays, those indicators should make it into the main distribution.
Reverting Resampling/Replaying to the original model [Discarded]
Once the above items are done ... v2 can be kickstarted to try to
The event is over ...
and we even ...
A repository with the snippets:
There you will also find a PDF with the notes that were taken during the event (including the pictures)
And of course ... the best picture ever
Some of the contributions which were in the queue would for sure be nice additions for some users of `backtrader`. In order to allow for a seamless integration of those for as many people as possible, the following approach will be taken:
A side project named `backtrader_contrib` will be created
This project will accept pull requests
Elements submitted (indicators, analyzers, ...) will be injected into the main backtrader package
The licensing is left open to the user with one exception: no GPLv2. The reason is that some licenses, like the Apache License 2.0, are not compatible with GPLv2, but they are with GPLv3.
The repository will contain the license text for GPLv3, ASF 2.0, MIT, BSD
The repository will be published as a `pip` package on a regular basis
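One way the injection of contributed elements into the main package namespace could work (a sketch of the general Python technique, not the actual `backtrader_contrib` mechanism) is to register the contributed module under the host package in `sys.modules`:

```python
import sys
import types

def inject(parent_name, child_name, module):
    """Register `module` as `parent_name.child_name` so that both
    `import parent_name.child_name` and attribute access work.

    Sketch of the idea only; the real backtrader_contrib wiring may differ.
    """
    sys.modules[parent_name + '.' + child_name] = module
    parent = sys.modules.get(parent_name)
    if parent is not None:
        setattr(parent, child_name, module)

# Hypothetical contributed indicator packaged as a module
contrib = types.ModuleType('myindicator')
contrib.MAGIC = 42

# A stand-in parent package is used here instead of backtrader itself
parent = types.ModuleType('hostpkg')
sys.modules['hostpkg'] = parent
inject('hostpkg', 'myindicator', contrib)

import hostpkg.myindicator  # resolved from sys.modules, no file needed
print(hostpkg.myindicator.MAGIC)
```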
A couple of weeks after we kick-started the idea of Algotopian, and after having evaluated the interest from users in the form of registrations, investment interest and team participation, we have made a final decision with regards to the project.

Although the number of registered people exceeded our expectations, it doesn't reach the bare minimum needed with regards to investment.
It wasn't our intention to burn the names of the agreed team members, advisors and an additional founder; by proceeding as we are, we wanted to be able to quietly cancel the project without having engaged in any kind of funds collection.
We'll probably recycle the name Algotopian in connection to the further development of backtrader and potential services around it. But at the moment, the project is officially cancelled.
Thank you for your interest and best regards.
This is a very recent error introduced with this commit: https://github.com/backtrader/backtrader/commit/8f537a1c2c271eb5cfc592b373697732597d26d6
While attempting to fix the `bool` problem when only 1 trade existed, the proper behavior of not identifying lost trades was lost.

This is now fixed in this push to the development branch: https://github.com/backtrader/backtrader/commit/cc2751a5f53166f68c5340eb876579f1a5590bf5
It would seem that downloading from Yahoo is no longer full of quirks and column swaps. The feed has been cleaned up, removing traces of the original API and making it clearer, with more parameters where possible. The README in the repository has also been updated to reflect this fact.
You increase the number of objects you create.
Why should things run at the same speed regardless of the number of objects? It would be an incredible feat.
In your example you create one hundred thousand (100000) simple moving averages. Are you expecting that nothing happens with them during program execution?
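To make the scaling argument concrete, here is a pure-Python sketch (not backtrader internals) showing that each bar has to touch every indicator object, so runtime grows roughly linearly with the number of instantiated objects:

```python
import time

class SMA:
    """Minimal streaming simple moving average (pure Python sketch)."""
    def __init__(self, period):
        self.period = period
        self.window = []
        self.value = None

    def update(self, price):
        self.window.append(price)
        if len(self.window) > self.period:
            self.window.pop(0)
        if len(self.window) == self.period:
            self.value = sum(self.window) / self.period

def run(n_indicators, n_bars=200):
    smas = [SMA(15) for _ in range(n_indicators)]
    t0 = time.perf_counter()
    for bar in range(n_bars):
        for sma in smas:       # every object does work on every bar
            sma.update(float(bar))
    return time.perf_counter() - t0

# 100x more indicator objects means roughly 100x more per-bar work
print(run(10), run(1000))
```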
The GitHub tracker was becoming a type of bulletin board, with questions and requests intermixed with any real issues.
And some users were even answering others' questions and even fulfilling some coding requests.
Hence the decision to try to unify everything under a community board which anyone can look into and where those questions, doubts and others can be better followed and managed.
It will hopefully be of help for those who like `backtrader` and find it useful
A `Sizer` can be used as a portfolio manager in the sense that it can:

A `Sizer` has access to the `strategy` in which it is running and to the associated `broker`, and with it to the universe of assets and, for example, the net liquidation value.

This means complex logic can be implemented to support the mentioned criteria: margins, VaR.

What a `Sizer` cannot do:
A fully fledged portfolio manager would require development (not light, for sure), and popular support (or some other sort of backing) for such a feature would be needed.
The solution above is for sure one that works. There are alternatives.
For example, `Sizer` instances have a specific attribute which was conceived for such a thing: `strategy` (see the docs: Sizers - Smart Staking).
Cerebro instances share the same array of data feeds. With that in mind the code above could look different.
```python
class VaRContracts(bt.Sizer):
    params = (
        ('percent', 1),
        ('leverage', 40000),
        ('scaled', None),
        ('reverse', False),
        ('didx', 0),
    )

    def __init__(self):
        self._data = self.strategy.datas[self.p.didx]

    def _getsizing(self, comminfo, cash, data, isbuy):
        self.pchg = self.calc_pchg(self._data.close.get(size=51))
```
This uses a numeric index to address the array.
Using names could make even more sense. This has the advantage that you don't need to have created the data feed before adding the sizer.
```python
cerebro.addsizer(lsizers.VaRContracts, percent=1, leverage=args.leverage,
                 scaled=args.scaled, dataname='SPY')
cerebro.adddata(data0, name='SPY')
```
```python
class VaRContracts(bt.Sizer):
    params = (
        ('percent', 1),
        ('leverage', 40000),
        ('scaled', None),
        ('reverse', False),
        ('dataname', None),
    )

    def __init__(self):
        self._data = self.strategy.getdatabyname(self.p.dataname)

    def _getsizing(self, comminfo, cash, data, isbuy):
        self.pchg = self.calc_pchg(self._data.close.get(size=51))
```
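Both snippets call a `calc_pchg` helper whose body isn't shown above. One plausible pure-Python implementation (an assumption, not the original code) turns the last 51 closes into 50 bar-to-bar percentage changes:

```python
def calc_pchg(closes):
    """Turn N+1 closing prices into N bar-to-bar percentage changes.

    Hypothetical helper: the sizers above call calc_pchg but its body
    isn't reproduced in the article, so this is only one plausible version.
    """
    closes = list(closes)
    return [(b - a) / a * 100.0 for a, b in zip(closes, closes[1:])]

# 51 closes would yield 50 changes; a short series for illustration:
print(calc_pchg([100.0, 101.0, 99.99]))
```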