Genetic Optimization
-
Does anyone know how to make the "reward" the number of profitable trades? Basically, instead of total profit, Sharpe ratio, or consistency, I want the measure to require that the strategy a) is profitable overall (so if it loses money over the whole backtest it doesn't count), and b) has a good ratio of winning trades to losing trades.
I'm working on a research project where I build a trading system that just makes a little profit here and there, rather than trading constantly.
-
@d416 How can I go about changing the performance measure? I'm going to start digging for it, but just thought I'd ask to maybe save some time :)
-
@Wayne-Filkins-0 Pass whatever you want to optimize to the maximize function. For your case, you could multiply a binary value (profitable/unprofitable) by the percentage of profitable trades and pass that to the gen-opt function.
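For concreteness, here is a minimal sketch of such an objective (my own illustration, not from the thread; it assumes a data feed and a strategy class named MyStrategy already exist, and uses backtrader's TradeAnalyzer to count winning trades):

import backtrader as bt

def winrate_objective(para):
    # Score = (1 if the backtest ended profitable, else 0) * fraction of winning trades
    cerebro = bt.Cerebro()
    cerebro.adddata(data)  # 'data' = your feed, defined elsewhere
    cerebro.addstrategy(MyStrategy, fast=para["fast"], slow=para["slow"])
    cerebro.addanalyzer(bt.analyzers.TradeAnalyzer, _name="trades")

    start_value = cerebro.broker.getvalue()
    result = cerebro.run()
    end_value = cerebro.broker.getvalue()

    ta = result[0].analyzers.trades.get_analysis()
    closed = ta.get("total", {}).get("closed", 0)
    won = ta.get("won", {}).get("total", 0)

    profitable = 1.0 if end_value > start_value else 0.0
    win_rate = won / closed if closed else 0.0
    return profitable * win_rate  # maximize this in the optimizer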
-
@Wayne-Filkins-0 Agree with @hghhgghdf-dfdf
The key part of the above code is this line: return cerebro.broker.getvalue()
This is a super simple method that uses the cash in the account as a measure of performance, but the true BT way would be to use Analyzers https://www.backtrader.com/docu/analyzers/analyzers/
-D
-
@d416 In your optunity script, if you want to change the parameters to decimal numbers like 0.1 - 2.6 or something, do you just type them as a decimal range and it knows to search all decimals, or do you have to do something else?
-
Something like this:

import numpy as np
param = np.arange(0.1, 0.9, 0.05)
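To make that concrete (a small illustrative addition, not from the thread): np.arange simply builds the grid of decimal candidate values, which you can then pass wherever the integer range went before, e.g. as a search-space entry in the GFO style shown later in this thread (the parameter name below is hypothetical):

import numpy as np

# Candidate values 0.1, 0.15, 0.2, ..., 0.85 (subject to the usual float rounding)
param = np.arange(0.1, 0.9, 0.05)

# Hand the array to the optimizer as the set of values it may try for that parameter
search_space = {"my_decimal_param": param}  # 'my_decimal_param' is just an illustration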
-
Recently I've been experimenting with Gradient-Free-Optimizers, a Python optimization library that is well documented and relatively easy to implement.
The library includes multiple optimization techniques, including Particle Swarm and Genetic (Evolution Strategy). There is also a handy graphic advising which optimization technique to use, based on the convexity of your strategy (function).
Here is my very basic code example using the Simple Moving Average crossover strategy on 1 year of TSLA daily data from Yahoo:
# https://github.com/SimonBlanke/Gradient-Free-Optimizers
import numpy as np
from gradient_free_optimizers import EvolutionStrategyOptimizer
import datetime
# import dateutil.parser
# import pytz, tzlocal
import backtrader as bt
import backtrader.indicators as btind
import backtrader.feeds as btfeeds


class MA_CrossOver(bt.Strategy):
    # This is a long-only strategy which operates on a moving average cross
    alias = ('SMA_CrossOver',)

    params = (
        # period for the fast Moving Average
        ('fast', 10),
        # period for the slow moving average
        ('slow', 30),
        # moving average to use
        ('_movav', btind.MovAv.SMA)
    )

    def __init__(self):
        sma_fast = self.p._movav(period=self.p.fast)
        sma_slow = self.p._movav(period=self.p.slow)
        self.buysig = btind.CrossOver(sma_fast, sma_slow)

    def next(self):
        if self.position.size:
            if self.buysig < 0:
                self.sell()
        elif self.buysig > 0:
            self.buy()


def runstrat(para):
    # smacrossover
    cerebro_opt = bt.Cerebro(runonce=True, optdatas=True)
    cerebro_opt.adddata(data)
    cerebro_opt.addstrategy(MA_CrossOver, fast=para["fast"], slow=para["slow"])
    cerebro_opt.run()
    return cerebro_opt.broker.getvalue()
# --- end runstrat ---


# Add the feed
fromdate = datetime.datetime.strptime('2020-06-01', '%Y-%m-%d')
todate = datetime.datetime.strptime('2021-06-02', '%Y-%m-%d')
data = btfeeds.YahooFinanceData(
    dataname='TSLA',
    fromdate=fromdate,
    todate=todate)

# --- smacrossover search space ---
search_space = {
    "fast": np.arange(5, 200, 1),
    "slow": np.arange(5, 200, 1),
}

iterations = 1000
opt = EvolutionStrategyOptimizer(search_space)
opt.search(runstrat, n_iter=iterations)  # repo says > 10000 but that's looong

best_param_fast = opt.best_para['fast']
best_param_slow = opt.best_para['slow']
print('best_param_fast: ' + str(best_param_fast))
print('best_param_slow: ' + str(best_param_slow))
At the end of the optimization run, GFO will present statistics. The above example took 17 minutes to run, with most of that time spent on 'evaluation', i.e. running the strategy.
If there is a way to make the strategy more efficient, that would certainly reduce the time. Avoiding the online Yahoo fetch also helps - this strategy takes 30% less time to run when the data is read locally.
'Iterations' is a GFO setting - the above example uses 1,000 iterations, while their example on GitHub uses 10,000, but that objective is a simple math function rather than a trading strategy.
In this example, the function we're optimizing - 'runstrat' - returns the net portfolio value at the end of the run. This is what GFO uses to evaluate the run and come up with the best parameters. Analyzers could be used here instead to return Sharpe ratio, profit factor, MAE/MFE, or a combination. I hope this helps anyone who codes in this space.
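For instance, a minimal sketch (my own variation, not part of the original post) of a runstrat that scores on Sharpe ratio instead of final portfolio value, using backtrader's SharpeRatio analyzer:

def runstrat_sharpe(para):
    # Same setup as runstrat above, but the score is the Sharpe ratio
    cerebro_opt = bt.Cerebro(runonce=True, optdatas=True)
    cerebro_opt.adddata(data)
    cerebro_opt.addstrategy(MA_CrossOver, fast=para["fast"], slow=para["slow"])
    cerebro_opt.addanalyzer(bt.analyzers.SharpeRatio, _name="sharpe")
    result = cerebro_opt.run()
    sharpe = result[0].analyzers.sharpe.get_analysis().get("sharperatio")
    # SharpeRatio can return None when there are not enough returns to compute it
    return sharpe if sharpe is not None else -999.0

# opt.search(runstrat_sharpe, n_iter=iterations)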
If anyone has any ideas to improve the optimization speed, that would be amazing.
-
@d416 Thank you for pointing to Gradient-Free-Optimizers. They are amazing and do optimization super fast!
In my tests they are 40x faster than the built-in brute-force optimization algo. I have prepared optimization statistics for the different optimizers from Gradient-Free-Optimizers.
I did my tests on the following simple strategy (just for proof):

class SmaCross(bt.SignalStrategy):
    params = (
        ('fast', 10),
        ('slow', 30),
    )

    def __init__(self):
        sma1, sma2 = bt.ind.SMA(period=self.p.fast), bt.ind.SMA(period=self.p.slow)
        crossover = bt.ind.CrossOver(sma1, sma2)
        self.signal_add(bt.SIGNAL_LONG, crossover)
And the test parameters were:
"fast": np.arange(5, 150, 2), "slow": np.arange(50, 150, 2)
Most interesting are the results for optimization times and scores:
- Built-in brute force: score: 10035.94200515747, time: 207.48 s, para: {'fast': 57, 'slow': 56}
- HillClimbingOptimizer: score: 10023.95000076294, time: 7.72 s, para: {'fast': 129, 'slow': 54}
- RepulsingHillClimbingOptimizer: score: 10023.95000076294, time: 15.99 s, para: {'fast': 119, 'slow': 60}
- SimulatedAnnealingOptimizer: score: 10023.95000076294, time: 7.24 s, para: {'fast': 131, 'slow': 52}
- RandomSearchOptimizer: score: 10026.352001190186, time: 20.60 s, para: {'fast': 61, 'slow': 56}
- RandomRestartHillClimbingOptimizer: score: 10023.95000076294, time: 10.14 s, para: {'fast': 127, 'slow': 52}
- RandomAnnealingOptimizer: score: 10023.95000076294, time: 8.63 s, para: {'fast': 125, 'slow': 56}
- ParallelTemperingOptimizer: score: 10021.9880027771, time: 15.08 s, para: {'fast': 147, 'slow': 50}
- ParticleSwarmOptimizer: score: 10030.186000823975, time: 15.71 s, para: {'fast': 61, 'slow': 54}
- EvolutionStrategyOptimizer: score: 10023.95000076294, time: 14.98 s, para: {'fast': 131, 'slow': 52}
- DecisionTreeOptimizer: score: 10035.94200515747, time: 5.05 s, para: {'fast': 57, 'slow': 56}
To summarize:
- Built-in brute force: score: 10035.94200515747, time: 207.48 s
- DecisionTreeOptimizer: score: 10035.94200515747, time: 5.05 s
DecisionTreeOptimizer reached the same score and was 40x faster!
You can check my calculations on GitHub.