Optimization leads to worse results than randomly picked parameters

I am trying to optimize my algorithm, but after optimizing I ended up with a worse outcome than randomly selecting some parameters.
At first I thought I must have done something wrong when selecting the optimal value from the results of cerebro.run(), but I checked the values and the code is working exactly the way I want it to.
I am optimizing on the return/drawdown ratio and selecting the parameters with the highest value of this ratio across all optimization runs. When I inspected the results, the global maximum was 11.0, but the values around it were things like 1.1, 0.2, 1.4 and 6.5 (that one just before the 11.0), with most values sitting around 1. So I suspect this is not actually an optimal value, just a fluke of the numbers.
So, my question is: how do I find optimal values among these hundreds of ratios?
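To make the problem concrete, here is a minimal sketch of the kind of selection described above, assuming each optimization run has already been reduced to a plain dict with the tested parameter and its return/drawdown ratio (the names and numbers are illustrative, not the backtrader API):

```python
# Hypothetical flattened optimization results; the 11.0 is the
# isolated spike described in the question.
runs = [
    {"period": 8,  "ratio": 1.1},
    {"period": 9,  "ratio": 0.2},
    {"period": 10, "ratio": 11.0},  # global maximum, poor neighbors
    {"period": 11, "ratio": 1.4},
    {"period": 12, "ratio": 6.5},
]

# Naive selection: take the single highest ratio.
best = max(runs, key=lambda r: r["ratio"])
print(best)  # picks the 11.0 spike even though its neighbors are weak
```

This is exactly the selection rule that latches onto a lucky outlier, which is what the rest of the thread is about avoiding.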
Export your backtesting data to Excel and analyze it over there.
Having all the results lined up makes it plain as day whether a parameter combination is the result of dumb luck or actual alpha.

@hghhgghdfdfdf I understand that this is a possibility, but I have automated the optimization process and can't manually select the optimized parameters. There must be some logic for selecting them, otherwise I won't be able to code it.

You could, for each variable, average the returns for each parameter value and then select the best-performing value from that selection,
e.g.:
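A minimal sketch of that per-parameter averaging, using hypothetical flattened results rather than the backtrader API:

```python
from collections import defaultdict

# Each entry is one optimization run: the parameter combination
# tried plus the fitness (e.g. return/drawdown) it produced.
results = [
    {"fast": 5,  "slow": 20, "fitness": 1.2},
    {"fast": 5,  "slow": 30, "fitness": 0.8},
    {"fast": 10, "slow": 20, "fitness": 2.0},
    {"fast": 10, "slow": 30, "fitness": 1.6},
]

def best_value(results, param):
    """Average fitness over every run sharing a value of `param`,
    then return the value with the highest average."""
    buckets = defaultdict(list)
    for run in results:
        buckets[run[param]].append(run["fitness"])
    return max(buckets, key=lambda v: sum(buckets[v]) / len(buckets[v]))

print(best_value(results, "fast"))  # -> 10 (avg 1.8 beats avg 1.0)
```

Because each average pools every run sharing that value, a single lucky combination cannot dominate the choice on its own.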
Alternatively, you could include the neighboring parameters in the selection process (the fitness value of period=10 becomes the average of the fitness values of periods 8-12).
Many possible solutions.
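The neighbor-averaging idea can be sketched like this; the period/fitness numbers are the illustrative ones from the question, not real output:

```python
def smoothed_fitness(fitness_by_period, period, width=2):
    """Average the fitness of `period` with its tested neighbors
    in the range period-width .. period+width."""
    window = [fitness_by_period[p]
              for p in range(period - width, period + width + 1)
              if p in fitness_by_period]
    return sum(window) / len(window)

# Echoing the question: an 11.0 spike surrounded by mediocre values.
fitness_by_period = {8: 1.1, 9: 0.2, 10: 11.0, 11: 1.4, 12: 6.5}

best = max(fitness_by_period,
           key=lambda p: smoothed_fitness(fitness_by_period, p))
```

Smoothing rewards parameter values whose whole neighborhood performs well, so an isolated spike like the 11.0 no longer wins on its own.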