1.9.60.122
- Remove unused files
- README update, Docstring corrections, documentation corrections
- Update travis settings
Available via pip
Thank you for the message. All options are being considered (messages from other threads were also conveyed)
1.9.66.122
Because the analyzer has not been able to calculate any SharpeRatio.
If you only have 1 year of data, and taking into account that the default timeframe for the calculation is years, no calculation can take place: Sharpe needs at least 2 samples for the given timeframe to calculate the variance.
Rather than using AnnualReturn, consider using TimeReturn and specifying the actual target timeframe.
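As a configuration sketch of the suggestion (this assumes backtrader is installed; monthly is just an example target timeframe, and the _name values are arbitrary):

```python
import backtrader as bt

cerebro = bt.Cerebro()

# With only 1 year of data, a monthly timeframe yields ~12 samples,
# enough for the SharpeRatio analyzer to compute a variance
cerebro.addanalyzer(bt.analyzers.SharpeRatio,
                    timeframe=bt.TimeFrame.Months, _name='sharpe')

# TimeReturn over the same target timeframe instead of AnnualReturn
cerebro.addanalyzer(bt.analyzers.TimeReturn,
                    timeframe=bt.TimeFrame.Months, _name='timereturn')
```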
Let's try to summarize:
Non-live data feeds have a well defined interface, and it is documented.
You only have to override _load(self), which is in charge of loading the values into the lines of the data series (in most cases these will be: datetime, open, high, low, close, volume and, for futures, openinterest). It returns True if it has been able to fill the next set of values, or False to indicate the end of the data stream.
What's left for the implementer: the decision as to how _load(self) receives the values from the feed. Examples:
From a file: for each call to _load you can simply read the next line, process the data, fill the lines and return, until EOF is met. There is even documentation on how to do it for a binary file: Docs - Binary Datafeed Development
This has, for example, been generalized for CSV-based sources by adding a _loadline method, which receives the line broken down into tokens. In this case only overriding _loadline is needed. See: Docs - CSV Data Feed Development
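The _load contract can be sketched with plain Python and an in-memory file. This is illustrative only: a real feed would subclass bt.feed.DataBase and fill self.lines; the class and attribute names here are made up.

```python
import io

# Two made-up daily bars: date, open, high, low, close, volume
CSV = ("2020-03-20,6200.0,6250.0,6180.0,6219.15,1000\n"
       "2020-03-21,6219.15,6300.0,6210.0,6280.0,1200\n")

class FileFeedSketch:
    def __init__(self, f):
        self.f = f
        self.bars = []  # stand-in for the real line buffers

    def _load(self):
        line = self.f.readline()
        if not line:
            return False  # EOF: end of the data stream
        dt, o, h, l, c, v = line.strip().split(',')
        # "fill the lines" with the parsed values
        self.bars.append((dt, float(o), float(h), float(l), float(c), float(v)))
        return True  # one more set of values has been filled

feed = FileFeedSketch(io.StringIO(CSV))
while feed._load():
    pass
print(len(feed.bars))  # -> 2
```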
Live data feeds could have a couple of things added:
The methods islive and tzoffset, the chosen pattern for data reception, and the extra return value for _load, which is None, to indicate that the data feed currently has nothing to deliver but could have it later (the data stream is active but has not come to an end)
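The extended three-valued _load contract for live feeds can be sketched like this (again illustrative only, not the real backtrader classes):

```python
from collections import deque

# Extended _load contract for live feeds:
#   True  -> a bar was delivered
#   None  -> nothing to deliver yet, but the stream is still alive
#   False -> the stream has come to an end
class LiveFeedSketch:
    def __init__(self):
        self.queue = deque()  # bars pushed in by a provider thread
        self.ended = False
        self.last_bar = None

    def islive(self):
        return True  # tells the platform preloading/runonce must be off

    def _load(self):
        if self.queue:
            self.last_bar = self.queue.popleft()
            # ... a real feed would fill self.lines from the bar here ...
            return True
        if self.ended:
            return False
        return None

feed = LiveFeedSketch()
print(feed._load())  # -> None (stream alive, no data yet)
feed.queue.append({'close': 100.0})
print(feed._load())  # -> True (a bar was consumed)
feed.ended = True
print(feed._load())  # -> False (end of stream)
```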
The problem with the rest ... it is provider dependent. And the hackathon (aka BacktraderCon 2017) last weekend has proven it. The initial implementation by the guys followed the guidelines from the Oanda implementation, but because the provider (Kraken) only offers polling and has low limits in its Rate Limiting policy, everything is a historical download at the end of the day. Suddenly, instead of 2 queues, both queues are the same but the usage is different.
Brokers have a well defined interface
Here it is really a matter of work in the store, which offers a private interface to the broker. Through this interface, for example, the store will convert an order from backtrader into an order which is palatable to the real broker. On the way back, broker notifications have to be adapted to change the status of orders appropriately.
A kind of paper to give guidelines can be considered (and will be done, so to say), but at the end of the day, and the guys from the BacktraderCon could tell a lot about it, it's about the very small details of each broker.
A summary of what's on the table:
Time Scheduled Functions/Callbacks [Done]
Functions will be called at specific times (periodical calling like ... "every 60 minutes starting at 08:30" may be considered)
Timezone considerations: tz-bound like the data feeds. pytz doesn't have a notion of local timezone. This poses a small challenge when it comes to accepting times which refer to the actual local time (it may not seem so in real time, but it is in backtesting, in which the datetime reference is marked by the data feeds being backtested)
The best possible place to add the scheduling seems to be in the strategy's __init__ method, and as such it should be a method of Strategy. This doesn't preclude having a global scheduling which could be entered in cerebro to control any strategy.
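The "every 60 minutes starting at 08:30" semantics mentioned above can be sketched with plain datetime arithmetic. This is not backtrader's Timer implementation; next_trigger is a hypothetical helper showing only the scheduling math.

```python
import datetime

# Given the current time, compute the next trigger of a schedule that
# fires every `minutes` minutes starting at `start` each day.
def next_trigger(now, start=datetime.time(8, 30), minutes=60):
    day_start = datetime.datetime.combine(now.date(), start)
    if now <= day_start:
        return day_start  # first trigger of the day is still ahead
    elapsed = (now - day_start).total_seconds()
    periods = int(elapsed // (minutes * 60)) + 1
    return day_start + datetime.timedelta(minutes=periods * minutes)

now = datetime.datetime(2017, 9, 1, 10, 45)
print(next_trigger(now))  # -> 2017-09-01 11:30:00
```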
Collection of numpy/pandas/statsmodels dependent indicators [Done]
The discussions here have given birth to some.
So far none has been added to the package for a good reason: backtrader was meant to be a pure Python package, in the sense that it wouldn't need any external dependencies beyond a regular Python distribution (with the exception of matplotlib, if plotting was wished).
But at some point in time, and even if not based on pandas or numpy arrays, those indicators should make it into the main distribution.
Reverting Resampling/Replaying to the original model [Discarded]
Maybe
v2
Once the above items are done ... v2 can be kickstarted to try to
@tw00000 said in Datetime format internally?:
Using: [bt.utils.date.num2date(date) for date in self.datas[0].datetime.get(size=150)]
In place of: self.datas[0].datetime.get(size=150)
This is partially correct. If you are working with timezones, that will only give you UTC time. The correct form would be:
current_datetime = self.data.num2date()
For those 150 objects:
those_150_datetimes = [self.data.num2date(x) for x in self.data.datetime.get(size=150)]
The event is over ...
and we even ...
A repository with the snippets:
There you will also find a PDF with the notes that were taken during the event (including the pictures)
And of course ... the best picture ever
The x-axis is there (luckily, because if not, everything would be displayed on a singularity rim and we should fear a potential earth implosion into such an event), but the x-ticks (i.e.: timestamps in this case) are not displayed in the latest version of matplotlib (currently 3.1.1).
The previous stable version 3.0.3 and the last LTS version 2.2.4 do display the x-ticks.
pip install --force matplotlib==YOUR-PREFERRED-WORKING-VERSION
Some of the contributions which were in the queue would for sure be nice additions for some users of backtrader. In order to allow for a seamless integration of those for as many people as possible, the following approach will be taken:
A side project named backtrader_contrib will be created
This project will accept pull requests
Elements submitted (indicators, analyzers, ...) will be injected into the main backtrader package
The licensing is left open to the user with one exception: no GPLv2. The reason being that some licenses, like the Apache License 2.0, are not compatible with GPLv2, but are with GPLv3
See: GNU - A Quick Guide to GPLv3 and GNU - Various Licenses and Comments about Them
The repository will contain the license texts for GPLv3, ASF 2.0, MIT, BSD
The repository will be published as a pip package on a regular basis
1.9.75.123:
Dear @vladisld,
I have just read this thread and your statement
@vladisld said in Backtrader's Future:
Since no one I guess has a full understanding of the platform yet, we probably should limit ourselves to bug fixing only, at least initially - and see how it goes.
This is probably spot on and something I had failed to realize. Since your fork doesn't yet contain any changes, I am adding a few bug fixes to the main repository and releasing a new version with them.
@xyshell said in cerebro.resample() introduces data in the future:
Note: 6219.15 is the close price of the 1h data at 2020-03-20 01:00:00, while 6219.15 is also the open price of the 1min data
No. There is no such thing as the closing price of the 1-hour data, because that data doesn't exist. Naming things properly does help.
Additionally, the information you provide is wrong, which is confirmed by looking at your data.
Note for the other readers: for whatever reason, this data defies all established standards and has the following format: CHLOV-Timestamp (i.e.: Close, High, Low, Open, Volume, Timestamp)
Your data indicates that:
6219.15 is the closing price of the 1-min bar at 00:59:00, hence the last bar to go into the resampling for 1-hour between 00:00:00 and 00:59:00 (60 bars, if all are present), which is first delivered to you as a resampled bar at 01:00:00
6219.15 is the opening price of the 1-min bar at 01:00:00
At 01:00:00 you have two bits of information available:
The 1-min bar for 01:00:00
The 1-hour resampled data for the period 00:00:00 to 00:59:00
As expected.
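The bookkeeping above can be sketched numerically: sixty 1-minute bars stamped 00:00 to 00:59 compress into one 1-hour bar delivered at 01:00:00 (all values except 6219.15 are made up for this sketch):

```python
# (minute, open, high, low, close) for the bars 00:00 .. 00:58
minute_bars = [
    (m, 6200.0 + m, 6205.0 + m, 6195.0 + m, 6201.0 + m) for m in range(59)
]
minute_bars.append((59, 6218.0, 6220.0, 6215.0, 6219.15))  # bar at 00:59

hour_bar = {
    'delivered_at': '01:00:00',            # end of the resampled period
    'open': minute_bars[0][1],             # open of the 00:00 bar
    'high': max(b[2] for b in minute_bars),
    'low': min(b[3] for b in minute_bars),
    'close': minute_bars[-1][4],           # close of the 00:59 bar
}
print(hour_bar['close'])  # -> 6219.15
```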
@benjomeyer said in Timers:
timername='selltimer',
Given how you pass it ... you could probably try this instead
def notify_timer(self, timer, when, timername):
    if timername == 'buytimer':
        self.buy_stocks()
    elif timername == 'selltimer':
        self.sell_stocks()
given that you have no additional *args or **kwargs. But as a more general approach I would suggest this
def notify_timer(self, timer, when, **kwargs):
    timername = kwargs.get('timername', None)
    if timername == 'buytimer':
        self.buy_stocks()
    elif timername == 'selltimer':
        self.sell_stocks()
where you can also decide if a default logic for None is needed.
Note: notice the usage of elif
@Kevin-Fu said in Building Sentiment Indicator class: TypeError: must be real number, not LineBuffer:
def next(self):
    self.date = self.data.datetime
    date = bt.num2date(self.date[0]).date()
    prev_sentiment = self.sentiment
    if date in date_sentiment:
        self.sentiment = date_sentiment[date]
    self.lines.sentiment[0] = self.sentiment
In any case, that's probably where the error happens. There is no definition of self.sentiment, so the indicator understands you are looking for the line named sentiment. That is where the problems start.
@Kevin-Fu said in Building Sentiment Indicator class: TypeError: must be real number, not LineBuffer:
Any input would be greatly appreciated
But you provide no input with regards to the error. Only:
TypeError: must be real number, not LineBuffer
Python exceptions provide a stack trace which points to the different line numbers in the stack, allowing one to trace the error to its origin point (and showing any potential intermediate conflict).
The error is only telling us that you are passing an object (a LineBuffer) where a float should have been passed by you. But not where you are doing it ... which is in the stack trace ...
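For illustration, the same kind of TypeError can be reproduced outside backtrader: LineBufferLike below is a stand-in class (not backtrader's LineBuffer). Handing a non-numeric object to something that needs a float, here a %f format, raises the very error quoted above, and the traceback points at the offending line.

```python
class LineBufferLike:  # stand-in for backtrader's LineBuffer
    pass

msg = None
try:
    "%f" % LineBufferLike()  # an object used where a float is expected
except TypeError as e:
    msg = str(e)

print(msg)  # -> must be real number, not LineBufferLike
```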
@Kevin-Galkov said in DataCls does autoregister:
If I run the mementum/backtrader/samples/oandatest/oandatest.py , it complains:
In any case, it does complain because you don't have the proper package installed; you have something for v20.
You have to use this:
The built-in module is only compatible with the old Oanda API.
And oandatest.py works with the old built-in module.
Timers go to notify_timer, which receives the timer, when it is happening, and any extra *args and **kwargs you may have created the timer with. You can use any of the latter to differentiate timers.
Or you can simply use the timer id to associate different logic with each timer.