How does backtrader scale?
I'm working on a Quantopian-like platform and Backtrader seems to be a top alternative to Zipline. The main concern I have is how it would handle multiple users running backtests against Quantopian levels of data (estimated at 1 to 4TB). There are two areas of concern: loading all that data (with Zipline it seems this has to be done on every run), and the CPU load.
Has anybody here used Backtrader with large datasets? What size of data are people working with, and how well does Backtrader handle the biggest you've used?
Have you looked through the documentation? Backtrader has numerous options to tweak how the data is loaded and when indicators are calculated. By default all data is preloaded into memory, but this can easily be changed by passing the preload=False argument to Cerebro (https://www.backtrader.com/docu/cerebro.html).
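To make the trade-off concrete, here is a plain-Python sketch of the difference between preloading every bar and streaming bars one at a time (the function names and sample data are illustrative, not backtrader internals; in backtrader itself you would pass preload=False to bt.Cerebro, and optionally exactbars to cap the in-memory buffer):

```python
# Illustrative sketch: preloading a whole history vs. streaming it bar by bar.
# With 1-4TB of data, the preload approach needs the full history resident in
# RAM, while streaming keeps only the current bar.
import csv
import io

SAMPLE = "date,close\n2020-01-01,100\n2020-01-02,101\n2020-01-03,99\n"

def preload_bars(text):
    """Load every bar into memory up front (backtrader's default, preload=True)."""
    return list(csv.DictReader(io.StringIO(text)))

def stream_bars(text):
    """Yield bars one at a time (the behavior preload=False enables)."""
    for row in csv.DictReader(io.StringIO(text)):
        yield row

bars = preload_bars(SAMPLE)                                  # whole history in RAM
peak = max(float(b["close"]) for b in stream_bars(SAMPLE))   # O(1) bars in RAM
```

The streaming version computes the same result while holding only one row at a time, which is the property that matters when the dataset is far larger than memory.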
Additional ideas here:
Have you found any good sources for Quantopian-like data? It's very difficult (expensive) to acquire good data, and Quantopian always kept the data locked up in their platform.
It's very difficult (expensive) to acquire good data
Do you know any other frameworks that make it easier?
My understanding so far was that getting data can be as easy as downloading it from Yahoo/Google/Quandl, and that it doesn't depend much on the backtesting framework used.
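That framework-independence can be sketched in a few lines: a downloaded daily file is just OHLCV rows that any backtester can ingest (the column layout and sample values below are illustrative):

```python
# Sketch: a daily CSV export (e.g. from Quandl or Yahoo) parsed into plain
# Python dicts, independent of any backtesting framework.
import csv
import io
from datetime import datetime

CSV = """Date,Open,High,Low,Close,Volume
2020-01-02,100.0,101.5,99.5,101.0,120000
2020-01-03,101.0,102.0,100.0,100.5,90000
"""

bars = []
for row in csv.DictReader(io.StringIO(CSV)):
    bars.append({
        "dt": datetime.strptime(row["Date"], "%Y-%m-%d"),
        "open": float(row["Open"]),
        "close": float(row["Close"]),
    })
# In backtrader the same file on disk would be wired up through a CSV feed
# such as bt.feeds.GenericCSVData, pointing dataname at the file.
```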
My interests are primarily intraday (1-minute) FX and futures. Several years is typically available but much of the free FX data is very low quality. I would like to get historical data that includes depth of market and order histories but this data seems to be marketed towards hedge funds with deep pockets. Everything I found was on the order of $50k+. Please share if you know of any reasonable sources!
May I ask what platform you are working on? QuantRocket?