1.9.60.122
- Remove unused files
- README update, Docstring corrections, documentation corrections
- Update travis settings
Available via pip
Thank you for the message. All options are being considered (messages from other threads were also conveyed).
1.9.66.122
The x-axis is there (luckily, because if not, everything would be displayed on a singularity rim and we should fear a potential earth implosion into such an event), but the x-ticks (i.e.: timestamps in this case) are not displayed in the latest version of matplotlib (currently 3.1.1).
The previous stable version 3.0.3 and the last LTS version 2.2.4 do display the x-ticks.
pip install --force-reinstall matplotlib==YOUR-PREFERRED-WORKING-VERSION
Let's try to summarize:
Non-live data feeds have a well-defined interface and it is documented.
You only have to override _load(self), which is in charge of loading the values into the lines of the data series (in most cases these will be: datetime, open, high, low, close, volume and, for futures, openinterest). It returns True if it has been able to fill the next set of values or False to indicate the end of the data stream.
What's left for the implementer is the decision as to how _load(self) receives the values from the feed (see the sketch after this list). Examples:
From a file: for each call to _load you can simply read the next line, process the data, fill the lines and return, until EOF is met.
There is even documentation on how to do it for a binary file: Docs - Binary Datafeed Development
This has, for example, been generalized for CSV-based sources by adding a _loadline method, which receives the line broken down into tokens. In this case only overriding _loadline is needed. See Docs - CSV Data Feed Development
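A minimal sketch of that pattern follows (the class, the row source and its format are illustrative assumptions; DataBase, the lines and date2num are actual backtrader API):

import backtrader as bt

class MyFeed(bt.feed.DataBase):
    '''Sketch of a non-live feed: assumes dataname was passed as an
    iterable of (datetime, open, high, low, close, volume) tuples.'''

    def start(self):
        super(MyFeed, self).start()
        self._rows = iter(self.p.dataname)  # hypothetical row source

    def _load(self):
        try:
            dt, op, hi, lo, cl, vol = next(self._rows)
        except StopIteration:
            return False  # end of the data stream

        # fill the lines of the data series for this bar
        self.lines.datetime[0] = bt.utils.date.date2num(dt)
        self.lines.open[0] = op
        self.lines.high[0] = hi
        self.lines.low[0] = lo
        self.lines.close[0] = cl
        self.lines.volume[0] = vol
        self.lines.openinterest[0] = 0.0
        return True  # a new set of values has been loaded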
Live data feeds have a couple of things added: the methods islive and tzoffset, the chosen pattern for data reception, and an extra return value for _load, namely None, which indicates that the data feed currently has nothing to deliver but could have it later (the data stream is active but has not come to an end).
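A hedged sketch of those three return values, assuming bars arrive on a standard queue filled by a (hypothetical) provider thread which is not shown:

import queue

import backtrader as bt

class MyLiveFeed(bt.feed.DataBase):
    '''Sketch of a live feed: a hypothetical provider thread pushes
    (datetime, open, high, low, close, volume) tuples into self._q
    and a None sentinel when the stream ends.'''

    def islive(self):
        return True  # tells cerebro to disable preloading/runonce

    def start(self):
        super(MyLiveFeed, self).start()
        self._q = queue.Queue()  # filled by the provider thread

    def _load(self):
        try:
            bar = self._q.get(timeout=0.5)
        except queue.Empty:
            return None  # nothing to deliver yet; the stream is alive

        if bar is None:
            return False  # sentinel: the data stream has ended

        dt, op, hi, lo, cl, vol = bar
        self.lines.datetime[0] = bt.utils.date.date2num(dt)
        self.lines.open[0] = op
        self.lines.high[0] = hi
        self.lines.low[0] = lo
        self.lines.close[0] = cl
        self.lines.volume[0] = vol
        return True  # a new bar has been delivered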
The problem with the rest ... it is provider dependent. And the hackathon (aka BacktraderCon 2017) last weekend has proven it. The initial implementation by the guys followed the guidelines from the Oanda implementation, but because the provider (Kraken) only offers polling and has low limits in its rate-limiting policy, everything is a historical download at the end of the day. Suddenly, instead of 2 queues, both queues are the same but the usage is different.
Brokers have a well-defined interface
Here it is really a matter of work in the store, which offers a private interface to the broker. Through this interface the store will, for example, convert an order from backtrader into an order which is palatable to the real broker. On the way back, broker notifications have to be adapted to change the status of orders appropriately.
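As a purely hypothetical illustration of that translation (the helper name and payload fields are invented; only the Order attributes used are actual backtrader API):

import backtrader as bt

def order_to_provider(order):
    '''Hypothetical store helper: map a backtrader order onto the dict
    format a real broker API might expect (field names invented).'''
    return {
        'symbol': order.data._name,
        'side': 'buy' if order.isbuy() else 'sell',
        'quantity': abs(order.size),
        'type': 'market' if order.exectype == bt.Order.Market else 'limit',
        'price': order.price,  # None for market orders
    }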
A kind of paper giving guidelines can be considered (and will, so to say, be done), but at the end of the day, and the guys from BacktraderCon could tell a lot about it, it's about the very small details of each broker.
A summary of what's on the table:
Time Scheduled Functions/Callbacks [Done]
Functions will be called at specific times (periodic calling like ... "every 60 minutes starting at 08:30" may be considered); a sketch follows after the timezone notes below.
Timezone considerations: tz-bound like the data feeds. pytz doesn't have a notion of a local timezone. This poses a small challenge when it comes to accepting times which refer to the actual local time (it may not seem so in real time, but it is in backtesting, in which the datetime reference is marked by the data feeds being backtested).
The best possible place to add the scheduling seems to be in the strategy's __init__ method, and as such it should be a method of Strategy.
This doesn't preclude having a global scheduling which could be entered in cerebro to control any strategy.
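The [Done] above corresponds to backtrader's add_timer/notify_timer mechanism; a minimal sketch of the "every 60 minutes starting at 08:30" case from above:

import datetime

import backtrader as bt

class ScheduledStrategy(bt.Strategy):
    def __init__(self):
        # "every 60 minutes starting at 08:30"
        self.add_timer(
            when=datetime.time(8, 30),
            repeat=datetime.timedelta(minutes=60),
        )

    def notify_timer(self, timer, when, *args, **kwargs):
        print('timer fired at:', when)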
Collection of numpy/pandas/statsmodel dependent indicators [Done]
The discussions here have given birth to some.
So far none has been added to the package for a good reason: backtrader was meant to be a pure Python package, in the sense that it wouldn't need any external dependencies beyond a regular Python distribution.
(With the exception of matplotlib, if plotting was wished.)
But at some point in time, and even if not based on pandas or numpy arrays, those indicators should make it into the main distribution.
Reverting Resampling/Replaying to the original model [Discarded]
Maybe: v2. Once the above items are done ... v2 can be kickstarted to try to ...
Posts have been cleaned up and a JavaScript error which was reported has hopefully been resolved.
The event is over ...
and we even ...
A repository with the snippets:
There you will also find a PDF with the notes that were taken during the event (including the pictures)
And of course ... the best picture ever
This is a very recent error introduced with this commit: https://github.com/backtrader/backtrader/commit/8f537a1c2c271eb5cfc592b373697732597d26d6
In attempting to fix the bool problem when you only had 1 trade, the proper not needed to identify lost trades was lost.
This is now fixed in this push to the development branch: https://github.com/backtrader/backtrader/commit/cc2751a5f53166f68c5340eb876579f1a5590bf5
Some of the contributions which were in the queue would for sure be nice additions for some users of backtrader. In order to allow for a seamless integration of those for as many people as possible, the following approach will be taken:
A side project named backtrader_contrib will be created
This project will accept pull requests
Elements submitted (indicators, analyzers, ...) will be injected into the main backtrader package
The licensing is left open to the user with one exception: no GPLv2. The reason being that some licenses, like the Apache License 2.0, are not compatible with GPLv2, but are with GPLv3.
See: GNU - A Quick Guide to GPLv3 and GNU - Various Licenses and Comments about Them
The repository will contain the license text for GPLv3, ASF 2.0, MIT, BSD
The repository will be published as a pip package on a regular basis
Because the analyzer has not been able to calculate any SharpeRatio. If you only have 1 year of data, and taking into account that the default timeframe for the calculation is years, no calculation can take place: Sharpe needs at least 2 samples for the given timeframe to calculate the variance.
Rather than using AnnualReturn, consider using TimeReturn and specifying the actual target timeframe.
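For example, a sketch using the actual analyzer names with an illustrative monthly timeframe:

import backtrader as bt

cerebro = bt.Cerebro()
# ... add data and strategy here ...

# Monthly samples: 1 year of data yields 12 of them, which is enough
# for the variance needed by the Sharpe calculation
cerebro.addanalyzer(bt.analyzers.SharpeRatio, timeframe=bt.TimeFrame.Months)

# Collect the returns per month rather than per year
cerebro.addanalyzer(bt.analyzers.TimeReturn, timeframe=bt.TimeFrame.Months)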
@usmacscientist said in Zigzag indicator:
Is there a way to plot the last line (between the last validated turning point and the last price)? I can't figure out how to set the value of the indicator on the last bar to the last price.
There is no line because the indicator sets no value. And that's because the indicator has no way of knowing that the data feed has finished.
Indicators are meant to be as stupid as possible and simply perform an operation (and do so in an idempotent manner). This is the reason, for example, why indicators carry no datetime payload.
A potential trick: always set zigzag to the last price and invalidate it in the next cycle if not valid. At the end of the stream there will be no next chance to invalidate it.
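Sketched as (hypothetical) code inside the indicator's next method, where _is_confirmed stands in for whatever turning-point validation the real indicator performs:

def next(self):
    # provisionally peg the zigzag line of the current bar to the last price
    self.lines.zigzag[0] = self.data.close[0]

    # the next cycle gets the chance to invalidate the previous
    # provisional value; at the end of the stream there is no next
    # cycle, so the last value remains pegged to the last price
    if len(self) > 1 and not self._is_confirmed(-1):  # hypothetical check
        self.lines.zigzag[-1] = float('nan')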
1.9.62.122
A project is being considered now that would imply adding support. To be decided in the coming days.
Indicators can be written out automatically (to the destination of your choice) with a Writer. See the documentation.
And to automatically add a writer to cerebro which writes to standard output, see the documentation and use cerebro = Cerebro(writer=True) or cerebro.run(writer=True).
Indicators are by default not added to the output of writers; you need to enable it. For example:

def __init__(self):
    self.mysma = bt.indicators.SMA(period=15)
    self.mysma.csv = True  # include this indicator in the writer output
As for writing the values to a DataFrame, you may pass a DataFrame as a named argument to the indicator and add the values there, but taking into account that appending values to a DataFrame is a very expensive operation, you may prefer to do it at once during Strategy.stop:
def stop(self):
    # fetch the entire stored history of the sma line in one call
    myvalues = self.mysma.sma.get(size=len(self.mysma))
which you can easily put into a DataFrame.
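A sketch of the full round trip, assuming pandas is available (the strategy class and column names are illustrative):

import backtrader as bt
import pandas as pd

class SMAToFrame(bt.Strategy):
    def __init__(self):
        self.mysma = bt.indicators.SMA(period=15)

    def stop(self):
        size = len(self.mysma)
        # one bulk read at the end instead of appending row by row
        self.df = pd.DataFrame({
            'datetime': [self.data.num2date(x)
                         for x in self.data.datetime.get(size=size)],
            'sma': list(self.mysma.sma.get(size=size)),
        })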
A couple of weeks after we kick-started the idea of Algotopian, and after having evaluated the interest from users in the form of registrations, investment interest and team participation, we have made a final decision with regards to the project.
Although the number of registered people exceeded our expectations, it doesn't reach the bare minimum needed with regards to investment.
It wasn't our intention to burn the names of the agreed team members, advisors and an additional founder, because, as we are doing now, we wanted to be able to quietly cancel the project without having engaged in any kind of funds collection.
We'll probably recycle the name Algotopian in connection to the further development of backtrader and potential services around it. But at the moment, the project is officially cancelled.
Thank you for your interest and best regards.
Use the length of the data feed: len(data1). It will only change when new data is available.
Collect trades during notify_trade and add them to your dataframe.
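A small sketch combining both hints, assuming a second data feed data1 is present and the rows are later handed to pandas (the row field names are illustrative; the trade attributes are actual backtrader API):

import backtrader as bt

class TradeCollector(bt.Strategy):
    def __init__(self):
        self.rows = []  # later: pd.DataFrame(self.rows)
        self._last_len1 = 0

    def next(self):
        if len(self.data1) > self._last_len1:  # new bar on the 2nd feed
            self._last_len1 = len(self.data1)
            # ... react to the new data1 values here ...

    def notify_trade(self, trade):
        if trade.isclosed:  # record only completed round trips
            self.rows.append({
                'ref': trade.ref,
                'pnl': trade.pnl,
                'pnlcomm': trade.pnlcomm,
            })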
1.9.67.122
@tw00000 said in Datetime format internally?:
Using: [bt.utils.date.num2date(date) for date in self.datas[0].datetime.get(size=150)]
In place of: self.datas[0].datetime.get(size=150)
This is partially correct. If you are working with timezones, that will only give you UTC time. The correct form would be:
current_datetime = self.data.num2date()
For those 150 objects:
those_150_datetimes = [self.data.num2date(x) for x in self.data.datetime.get(size=150)]