Issues with HHO #46
This is also a problem with your benchmarks. Maybe the other algorithms are affected as well, but I am mainly interested in HHO since it seems to have the best results. Usually an optimization algorithm using box bounds would use these bounds to rescale its variables. HHO doesn't do this, which means that for real-world problems whose bounds are not well scaled:
a) Which bounds work best? The authors of the algorithm should provide a hint.
b) Testing without rescaling is not reliable.
See the code below. Of course the issue would not exist if the algorithm rescaled to the bounds it is given.

import numpy as np

def F5(x):
    # Rosenbrock function
    dim = len(x)
    x = np.asarray(x)
    o = np.sum(100 * (x[1:dim] - x[0:dim-1]**2)**2 + (x[0:dim-1] - 1)**2)
    return o
class Scaler(object):
    def __init__(self, lb, ub):
        self.lb = np.asarray(lb)
        self.ub = np.asarray(ub)

    def scale(self, X):
        X = np.asarray(X)
        return self.lb + X * (self.ub - self.lb)

    def unscale(self, X):
        X = np.asarray(X)
        return (X - self.lb) / (self.ub - self.lb)
class Scaled(object):
    def __init__(self, old_fun, old_lb, old_ub, lb, ub):
        self.old_sc = Scaler(old_lb, old_ub)  # old scale
        self.new_sc = Scaler(lb, ub)          # new scale
        self.old_fun = old_fun

    def fun(self, X):
        return self.old_fun(self.transform(X))

    def transform(self, X):
        return self.old_sc.scale(self.new_sc.unscale(X))

    def untransform(self, X):
        return self.new_sc.scale(self.old_sc.unscale(X))
class F5problem(object):
    def __init__(self):
        self.name = "F5"
        self.fun = F5
        self.lb = [-30] * 30
        self.ub = [30] * 30

class F5problem_rescaled(object):
    def __init__(self):
        self.name = "F5 scaled"
        self.scaled = Scaled(F5, [-30] * 30, [30] * 30, [100000] * 30, [100001] * 30)
        self.fun = self.scaled.fun
        self.lb = [100000] * 30
        self.ub = [100001] * 30
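As a quick illustration of the wrapper (this check is mine, not part of the original listing): a point in the original [-30, 30] box and its counterpart in the [100000, 100001] box give the same F5 value, so an optimizer that respects the new bounds is effectively optimizing the original function.

x_orig = np.full(30, 1.0)                    # a point in the original box (the F5 optimum)
rescaled = F5problem_rescaled()
x_new = rescaled.scaled.untransform(x_orig)  # its counterpart in the new box
print(F5problem().fun(x_orig))               # 0.0
print(rescaled.fun(x_new))                   # essentially 0.0 as well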
Why do the 3D graphs produced by the code have nothing to do with the algorithm-principle formulas that precede them? The graphs can even be drawn without using those formulas.
I have tried this in MATLAB: the plotting part at the end generates the images directly, without needing the HHO code in front of the formula part. The same is true in Python: the images can be drawn without the HHO principle code.
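For example, here is a minimal sketch (my own, assuming matplotlib is available; the axis ranges are arbitrary) that draws a 3D surface of the F5 / Rosenbrock formula without any optimizer code:

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(-2, 2, 200)
y = np.linspace(-1, 3, 200)
X, Y = np.meshgrid(x, y)
Z = 100 * (Y - X**2) ** 2 + (X - 1) ** 2   # 2-D Rosenbrock, same formula as F5

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.plot_surface(X, Y, Z, cmap="viridis")
ax.set_title("F5 (Rosenbrock), drawn directly from the formula")
plt.show()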
Two questions regarding HHO:
EvoloPy/optimizers/HHO.py
Line 117 in db8424f
if objf(X1)< fitness:
fitness is set in the loop at the beginning, so here it holds the objective value of the last X vector. Is this intentional? Why the last one?
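To make the question concrete, here is a toy snippet (mine, not the EvoloPy source; objf and X are made up) showing that after such a loop, fitness only reflects the last individual:

import numpy as np

objf = lambda x: float(np.sum(np.asarray(x) ** 2))
X = np.array([[3.0, 0.0], [2.0, 0.0], [1.0, 0.0]])

for i in range(X.shape[0]):
    fitness = objf(X[i, :])

print(fitness)  # 1.0, the value of the last row only
# so a later test like `objf(X1) < fitness` compares the candidate X1
# against the last X vector, not against the X[i, :] currently being updated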
EvoloPy/optimizers/HHO.py
Line 92 in db8424f
X[i,:]=(Rabbit_Location - X.mean(0))-random.random()*((ub-lb)*random.random()+lb)
is consistent with the paper but still looks strange:
(ub-lb)*random.random()+lb is a random phenotype vector within the bounds.
Think about bounds
lb = [100000]*dim
ub = [100001]*dim
then random.random()*((ub-lb)*random.random()+lb) probably doesn't do what it should.
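A quick numerical check of this (illustrative only; dim, lb and ub follow the example above):

import numpy as np
import random

dim = 30
lb = np.asarray([100000] * dim)
ub = np.asarray([100001] * dim)

# (ub - lb) * random.random() + lb is a point inside the box, i.e. about 100000,
# so multiplying it by random.random() gives a perturbation of up to ~100000
# even though the feasible box is only 1 unit wide
step = random.random() * ((ub - lb) * random.random() + lb)
print(step.min(), step.max())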
My feeling is that the scaling should be for the genotype, not the phenotype.
In https://github.com/7ossam81/EvoloPy/blob/master/benchmarks.py all boundaries are around 0 or have a lower bound of 0. Maybe you can include a test with shifted boundaries for an existing test function:
f(x) = ...
shift_f(x) = f(100000 + x)
shift_f should be equivalent to f with shifted boundaries.
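A minimal sketch of such a test (my own; it assumes F1 is the sphere function with bounds [-100, 100] as in benchmarks.py, and the names SHIFT and shift_F1 are just for illustration):

import numpy as np

def F1(x):
    # sphere function
    return np.sum(np.asarray(x) ** 2)

SHIFT = 100000

def shift_F1(x):
    # shift_f(x) = f(100000 + x): same landscape, optimum moved to x = -SHIFT
    return F1(np.asarray(x) + SHIFT)

dim = 30
lb = [-100 - SHIFT] * dim   # boundaries shifted accordingly, no longer around 0
ub = [100 - SHIFT] * dim
# an optimizer run on shift_F1 with these bounds should perform as well as
# a run on F1 with [-100, 100]; a large gap indicates a scaling problem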
Cheers, Dietmar