1.9.60.122
- Remove unused files
- README update, Docstring corrections, documentation corrections
- Update travis settings
Available via pip
Thank you for the message. All options are being considered (messages from other threads were also conveyed)
1.9.66.122
The x-axis is there (luckily, because if not, everything would be displayed on a singularity rim and we should fear a potential earth implosion into such an event), but the x-ticks (i.e.: timestamps in this case) are not displayed with the latest version of matplotlib (currently 3.1.1).
The previous stable version 3.0.3 and the last LTS version 2.2.4 do display the x-ticks.
pip install --force matplotlib==YOUR-PREFERRED-WORKING-VERSION
Let's try to summarize:
Non-Live data feeds have a well-defined interface and it is documented
You only have to override _load(self), which will be in charge of loading the values into the lines of the data series (in most cases these will be: datetime, open, high, low, close, volume and, for futures, openinterest).
It returns True if it has been able to fill the next set of values, or False to indicate the end of the data stream.
What's left for the implementer: the decision as to how _load(self) receives the values from the feed. Examples:
From a file: for each call to _load you can simply read the next line, process the data, fill the lines and return, until EOF is met (a minimal sketch along these lines is shown after the CSV note below).
There is even documentation on how to do it for a binary file: Docs - Binary Datafeed Development
This has, for example, been generalized for CSV-based sources by adding a _loadline method, which receives the line broken down into tokens. In this case only overriding _loadline is needed. See Docs - CSV Data Feed Development
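A minimal, hedged sketch of such a file-based feed follows. The class name, the column layout of the hypothetical file and the date format are made up for illustration; only _load (plus start/stop for the file handle) is overridden, returning True while data is available and False at EOF.

```python
import datetime

import backtrader as bt


class FileBarData(bt.feed.DataBase):
    '''Hypothetical feed reading "YYYY-MM-DD,open,high,low,close,volume"
    lines from the file given as dataname.'''

    def start(self):
        super(FileBarData, self).start()
        self.f = open(self.p.dataname, 'r')

    def stop(self):
        self.f.close()

    def _load(self):
        line = self.f.readline()
        if not line:
            return False  # EOF reached: the data stream is over

        tokens = line.rstrip('\r\n').split(',')
        dt = datetime.datetime.strptime(tokens[0], '%Y-%m-%d')

        # Fill the lines of the data series for this bar
        self.lines.datetime[0] = bt.date2num(dt)
        self.lines.open[0] = float(tokens[1])
        self.lines.high[0] = float(tokens[2])
        self.lines.low[0] = float(tokens[3])
        self.lines.close[0] = float(tokens[4])
        self.lines.volume[0] = float(tokens[5])
        self.lines.openinterest[0] = 0.0

        return True  # a new set of values has been loaded
```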
Live Data Feeds have a couple of things added: the methods islive and tzoffset, the chosen pattern for data reception, and the extra return value for _load, which is None, to indicate that the data feed currently has nothing to deliver but could have it later (the data stream is active but has not come to an end). A skeleton illustrating this follows below.
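A hedged skeleton of the live side, just to illustrate islive and the three return values of _load. The queue and the way a background poller would fill it are purely illustrative, not a real provider integration.

```python
import collections

import backtrader as bt


class PollingLiveData(bt.feed.DataBase):
    '''Illustrative live feed skeleton; a background poller is assumed to
    append bar tuples to self.qlive and a final None to signal the end.'''

    def __init__(self):
        super(PollingLiveData, self).__init__()
        self.qlive = collections.deque()

    def islive(self):
        return True  # tell the platform: no preloading, data arrives in real time

    def _load(self):
        if not self.qlive:
            return None  # nothing to deliver right now, but the stream is alive

        bar = self.qlive.popleft()
        if bar is None:
            return False  # the provider has signalled the end of the stream

        # bar is assumed to be (float-coded datetime, o, h, l, c, v, oi)
        (self.lines.datetime[0], self.lines.open[0], self.lines.high[0],
         self.lines.low[0], self.lines.close[0], self.lines.volume[0],
         self.lines.openinterest[0]) = bar
        return True
```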
The problem with the rest ... it is provider-dependent. And the hackathon (aka BacktraderCon 2017) last weekend has proven it: the initial implementation the guys made followed the guidelines from the Oanda implementation, but because the provider (Kraken) only offers polling and has low limits in its rate-limiting policy, everything is a historical download at the end of the day. Suddenly, instead of 2 queues, both queues are the same but the usage is different.
Brokers have a well-defined interface
Here it is really a matter of work in the store, which offers a private interface to the broker. Through this interface, for example, the store will convert an order from backtrader into an order which is palatable to the real broker. On the way back, broker notifications have to be adapted to change the status of orders appropriately.
A kind of paper giving guidelines can be considered (and will be done, so to say), but at the end of the day (and the guys from BacktraderCon could tell a lot about it) it's about the very small details of each broker.
A summary of what's on the table:
Time Scheduled Functions/Callbacks [Done]
Functions will be called at specific times (periodic calling like ... "every 60 minutes starting at 08:30" may be considered)
Timezone considerations:
- tz-bound like the data feeds
- pytz doesn't have a notion of local timezone. This poses a small challenge when it comes to accepting times which refer to the actual local time (it may not seem so in real time, but it is in backtesting, in which the datetime reference is marked by the data feeds being backtested)
The best possible place to add the scheduling seems to be in the strategy's __init__ method and as such it should be a method of Strategy. This doesn't preclude having a global scheduling which could be entered in cerebro to control any strategy. A hedged sketch of the resulting scheduling API is shown below.
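For reference, a minimal sketch assuming the add_timer/notify_timer names that shipped with the timer functionality (check the current docs for the exact parameters):

```python
import datetime

import backtrader as bt


class ScheduledStrategy(bt.Strategy):
    def __init__(self):
        # "every 60 minutes starting at 08:30", in the timezone context
        # provided by the data feeds being backtested
        self.add_timer(
            when=datetime.time(8, 30),
            repeat=datetime.timedelta(minutes=60),
        )

    def notify_timer(self, timer, when, *args, **kwargs):
        print('timer fired at', when)
```

The global variant mentioned above would presumably go through cerebro.add_timer instead, so that any strategy can be controlled without scheduling code in its __init__.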
Collection of numpy/pandas/statsmodel dependent indicators [Done]
The discussions here have given birth to some.
So far none has been added to the package for a good reason: backtrader was meant to be a pure Python package, in the sense that it wouldn't need any external dependencies beyond a regular Python distribution.
(With the exception of matplotlib if plotting was wished)
But at some point in time, and even if not based on pandas or numpy arrays, those indicators should make it into the main distribution
Reverting Resampling/Replaying to the original model [Discarded]
Maybe
v2
Once the above items are done ... v2 can be kickstarted to try to
This is a very recent error introduced with this commit: https://github.com/backtrader/backtrader/commit/8f537a1c2c271eb5cfc592b373697732597d26d6
In attempting to fix the bool problem when only 1 trade was present, the not needed to properly identify lost trades was lost.
This is now fixed in this push to the development
branch: https://github.com/backtrader/backtrader/commit/cc2751a5f53166f68c5340eb876579f1a5590bf5
The event is over ...
and we even ...
A repository with the snippets:
There you will also find a PDF with the notes that were taken during the event (including the pictures)
And of course ... the best picture ever
Some of the contributions which were in the queue would for sure be nice additions for some users of `backtrader`. In order to allow for a seamless integration of those for as many people as possible, the following approach will be taken:
A side project named backtrader_contrib
will be created
This project will accept pull requests
Elements submitted (indicators, analyzers, ...) will be injected into the main backtrader package
The licensing is left open to the user with one exception: no GPLv2. The reason being that some licenses, like the Apache License 2.0, are not compatible with GPLv2, but are with GPLv3
See: GNU - A Quick Guide to GPLv3 and GNU - Various Licenses and Comments about Them
The repository will contain the license text for GPLv3, ASF 2.0, MIT, BSD
The repository will be published as a pip
package on a regular basis
You read it right, and it has been corrected. The platform was tightened around that area due to how users were actually addressing the multiple possible corner cases.
Trade documentation is here: [Docs - Trade](https://backtrader.com/docu/trade/)
Member Attributes:
- ref: unique trade identifier
- status (int): one of Created, Open, Closed
- tradeid: grouping tradeid passed to orders during creation. The default in orders is 0
- size (int): current size of the trade
- price (float): current price of the trade
- value (float): current value of the trade
- commission (float): current accumulated commission
- pnl (float): current profit and loss of the trade (gross pnl)
- pnlcomm (float): current profit and loss of the trade minus commission (net pnl)
- isclosed (bool): records if the last update closed (set size to null) the trade
- isopen (bool): records if any update has opened the trade
- justopened (bool): if the trade was just opened
- baropen (int): bar in which this trade was opened
- dtopen (float): float coded datetime in which the trade was opened. Use the method open_datetime to get a Python datetime.datetime or use the platform-provided num2date method
- barclose (int): bar in which this trade was closed
- dtclose (float): float coded datetime in which the trade was closed. Use the method close_datetime to get a Python datetime.datetime or use the platform-provided num2date method
- barlen (int): number of bars this trade was open
- historyon (bool): whether history has to be recorded
- history (list): holds a list updated with each "update" event containing the resulting status and parameters used in the update
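A small usage sketch (the strategy and the printing format are illustrative) showing how these attributes typically get consumed inside notify_trade:

```python
import backtrader as bt


class TradeLogger(bt.Strategy):
    def notify_trade(self, trade):
        if trade.justopened:
            print('trade %d opened: size %d @ %.2f' %
                  (trade.ref, trade.size, trade.price))
        elif trade.isclosed:
            # dtclose is float-coded; close_datetime() yields a datetime.datetime
            print('trade %d closed on %s: pnl %.2f / pnlcomm %.2f' %
                  (trade.ref, trade.close_datetime(), trade.pnl, trade.pnlcomm))
```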
1.9.77.123:
1.9.76.123:
- writer.py: fix after the 1.9.75.123 pull
- trade.py upgraded to be able to be unpickled (#406)

There is a post which shows monthly rebalancing, using cerebro and adding each Strategy.
Given that the Gregorian calendar has months of different lengths, the best way is to use the first day seen of a month (which may not be the 1st if it's a weekend or a bank holiday) and execute the rebalancing.
You simply need to keep a sentinel value holding the last month in which rebalancing was done (or None at the start) and act when the month changes.
Because you have 3 different dates and they are set 1 month apart, this is actually guaranteed, because you won't receive data in your next method until the 3 strategies deliver data.
The 1st time you will see data is on 2013-03-01 and you can use 3 as the sentinel for the last execution. When the date changes to April, and with it the month to 4, you may execute the 1st rebalancing. A sketch of the sentinel approach follows below.
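A minimal sketch of the sentinel idea, here condensed into a single strategy holding all data feeds and doing an equal-weight rebalance (which may differ from the original multi-strategy setup):

```python
import backtrader as bt


class MonthlyRebalance(bt.Strategy):
    def __init__(self):
        self.last_month = None  # month in which the last rebalancing happened

    def next(self):
        month = self.data.datetime.date(0).month
        if month == self.last_month:
            return  # same month as the last rebalancing: nothing to do

        self.last_month = month  # e.g. 3 on 2013-03-01, 4 when April arrives
        self.rebalance()

    def rebalance(self):
        # illustrative only: equal-weight across all attached data feeds
        target = 1.0 / len(self.datas)
        for d in self.datas:
            self.order_target_percent(data=d, target=target)
```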
See, I do understand that you have a problem but, to start with and so that you may grasp the difficulties of attempting something:
What is res? In any case I do stand by my earlier diagnosis. The data feed has not been started because, assuming the return of res is a dataframe, that's not how to add it to cerebro.
See the documentation for PandasData: https://www.backtrader.com/docu/pandas-datafeed/pandas-datafeed/
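For illustration, something along these lines would be expected. The way res is loaded and its column layout are assumptions; the key point is wrapping the dataframe in PandasData and handing it to cerebro.adddata.

```python
import backtrader as bt
import pandas as pd

# assumption: res is a dataframe with a datetime index and
# open/high/low/close/volume columns
res = pd.read_csv('prices.csv', index_col=0, parse_dates=True)

cerebro = bt.Cerebro()
data = bt.feeds.PandasData(dataname=res)  # wrap the dataframe in a data feed
cerebro.adddata(data)  # the feed is started by cerebro during run()
cerebro.run()
```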
Pull-request has been merged. Version 1.9.77.123 has been generated
Quite difficult to reproduce anything from a picture. We don't even know how that's loaded, hence the comment before about a minimum reproducible sample.
It's going to be looked into this week (CW15 2023), unless the sky happens to fall on our heads.