NMinimize, NMaximize, Minimize and Maximize employ global optimization algorithms, and are thus suitable when a global optimum is needed.
Minimize and Maximize can find exact global optima for a class of optimization problems containing arbitrary polynomial problems. However, the algorithms used have a very high asymptotic complexity and therefore are suitable only for problems with a small number of variables.
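For example, a small polynomial problem like this one is solved exactly:

Minimize[{x^2 + y^2, x + y >= 1}, {x, y}]
(* {1/2, {x -> 1/2, y -> 1/2}} *)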
Maximize always finds a global maximum, even in cases that are numerically unstable. The left-hand side of the constraint here is (x^2 + y^2 - 10^10)^2 (x^2 + y^2).
In[1]:=
Maximize[{x + y,
100000000000000000000 x^2 - 20000000000 x^4 + x^6 +
100000000000000000000 y^2 - 40000000000 x^2 y^2 + 3 x^4 y^2 -
20000000000 y^4 + 3 x^2 y^4 + y^6 <= 1}, {x, y}] // N[#, 20] &
Out[1]=
This input differs from the previous one only in the twenty-first digit of one coefficient, but the answer is quite different, especially the location of the maximum point. To an algorithm using 16 digits of precision, the two problems look identical, so it cannot possibly solve both correctly.
In[2]:=
Maximize[{x + y,
100000000000000000001 x^2 - 20000000000 x^4 + x^6 +
100000000000000000000 y^2 - 40000000000 x^2 y^2 + 3 x^4 y^2 -
20000000000 y^4 + 3 x^2 y^4 + y^6 <= 1}, {x, y}] // N[#, 20] &
Out[2]=
NMaximize, which by default uses machine-precision numbers, is not able to solve either of the problems.
In[3]:=
NMaximize[{x + y,
100000000000000000000 x^2 - 20000000000 x^4 + x^6 +
100000000000000000000 y^2 - 40000000000 x^2 y^2 + 3 x^4 y^2 -
20000000000 y^4 + 3 x^2 y^4 + y^6 <= 1}, {x, y}]
Out[3]=
In[4]:=
NMaximize[{x + y,
100000000000000000001 x^2 - 20000000000 x^4 + x^6 +
100000000000000000000 y^2 - 40000000000 x^2 y^2 + 3 x^4 y^2 -
20000000000 y^4 + 3 x^2 y^4 + y^6 <= 1}, {x, y}]
Out[4]=
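Since the coefficients here are exact, one workaround (a sketch, not shown in the original) is to raise WorkingPrecision above machine precision; with the constraint written in its factored form as above, something like the following should let NMaximize distinguish the two problems:

NMaximize[{x + y, (x^2 + y^2 - 10^10)^2 (x^2 + y^2) <= 1}, {x, y},
 WorkingPrecision -> 30]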
FindMinimum only attempts to find a local minimum, and is therefore suitable when a local optimum is sufficient, or when it is known in advance that the problem has only one optimum, or only a few optima that can be discovered using different starting points.
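For example, with a simple one-dimensional objective, different starting points lead FindMinimum to different local minima:

FindMinimum[x^4 - 3 x^2 + x, {x, 1}]
(* the local minimum near x ~ 1.13, with value ~ -1.07 *)
FindMinimum[x^4 - 3 x^2 + x, {x, -1}]
(* the global minimum near x ~ -1.30, with value ~ -3.51 *)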
Even for local optimization, it may still be worth using NMinimize for small problems. NMinimize uses one of four direct search algorithms (Nelder-Mead, differential evolution, simulated annealing, or random search), then fine-tunes the solution using a combination of the KKT conditions, an interior point method, and a penalty method. So if efficiency is not an issue, NMinimize should be more robust than FindMinimum, in addition to being a global optimizer.
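Any of the four search algorithms can also be requested by name through the Method option, for example:

NMinimize[{Sin[x] + x^2/20, -10 <= x <= 10}, x,
 Method -> "DifferentialEvolution"]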
This shows the default behavior of NMinimize on a problem with four variables.
In[5]:=
Clear[f, x, y, z, t];
f = -Log[x] - 2 Log[y] - 3 Log[z] - 3 Log[t];
cons = {200 x^2 + y^2 + z^2 + t^2 == 100, x > 0, y > 0, z > 0,
t > 0};
vars = {x, y, z, t};
sol = NMinimize[{f, cons}, vars]
Out[9]=
This shows that two of the post-processors, KKT and FindMinimum, do not give the default result. Notice that for historical reasons, the name FindMinimum, when used as an option value of PostProcess, stands for the process where a penalty method is used to convert the constrained optimization problem into an unconstrained one, which is then solved using (unconstrained) FindMinimum.
In[10]:=
sol = NMinimize[{f, cons}, vars,
  Method -> {"NelderMead", "PostProcess" -> "KKT"}]
Out[10]=
In[11]:=
sol = NMinimize[{f, cons}, vars,
  Method -> {"NelderMead", "PostProcess" -> "FindMinimum"}]
Out[11]=
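To make the penalty idea concrete, here is a rough hand-rolled sketch: the equality constraint is folded into the objective as a quadratic penalty, and the result is handed to unconstrained FindMinimum. This illustrates the principle only, not NMinimize's actual internals; the weight 1000 and the starting values are arbitrary choices, and the -Log terms in f already act as a barrier keeping the variables positive.

penalized = f + 1000 (200 x^2 + y^2 + z^2 + t^2 - 100)^2;
FindMinimum[penalized, {{x, 1/10}, {y, 57/10}, {z, 57/10}, {t, 57/10}}]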
However, if efficiency is important, FindMinimum is the better choice when you only need a local minimum, when you can provide a good starting point, when you know the problem has only one minimum (for example, it is convex), or when the problem is large or expensive to evaluate. The following uses FindMinimum and NMinimize to solve the same seven-variable problem, whose constraints are relatively expensive to compute. Clearly FindMinimum is much faster than NMinimize in this case.
In[12]:=
Clear[f, cons, vars, x];
{f, cons,
vars} = {(20 x[2] x[6])/(x[1]^2 x[4] x[5]^2) + (15 x[3] x[4])/(
x[1] x[2]^2 x[5] x[7]^0.5`) + (10 x[1] x[4]^2 x[7]^0.125`)/(
x[2] x[6]^3) + (25 x[1]^2 x[2]^2 x[5]^0.5` x[7])/(
x[3] x[6]^2), {0.1` <= x[1] <= 10, 0.1` <= x[2] <= 10,
0.1` <= x[3] <= 10, 0.1` <= x[4] <= 10, 0.1` <= x[5] <= 10,
0.1` <= x[6] <= 10, 0.01` <= x[7] <= 10,
1 - (0.2` x[3] x[6]^(2/3) x[7]^0.25`)/(x[2] x[4]^0.5`) - (
0.7` x[1]^3 x[2] x[6] x[7]^0.5`)/x[3]^2 - (
0.5` x[1]^0.5` x[7])/(x[3] x[6]^2) >= 0,
1 - (3.1` x[2]^0.5` x[6]^(1/3))/(x[1] x[4]^2 x[5]) - (
1.3` x[2] x[6])/(x[1]^0.5` x[3] x[5]) - (0.8` x[3] x[6]^2)/(
x[4] x[5]) >= 0,
1 - (x[2] x[3]^0.5` x[5])/x[1] - (0.1` x[2] x[5])/(
x[3]^0.5` x[6] x[7]^0.5`) - (2 x[1] x[5] x[7]^(1/3))/(
x[3]^1.5` x[6]) - (0.65` x[3] x[5] x[7])/(x[2]^2 x[6]) >= 0,
1 - (0.3` x[1]^0.5` x[2]^2 x[3] x[4]^(1/3) x[7]^0.25`)/x[5]^(
2/3) - (0.2` x[2] x[5]^0.5` x[7]^(1/3))/(x[1]^2 x[4]) - (
0.5` x[4] x[7]^0.5`)/x[3]^2 - (0.4` x[3] x[5] x[7]^0.75`)/(
x[1]^3 x[2]^2) >=
0, (20 x[2] x[6])/(x[1]^2 x[4] x[5]^2) + (15 x[3] x[4])/(
x[1] x[2]^2 x[5] x[7]^0.5`) + (10 x[1] x[4]^2 x[7]^0.125`)/(
x[2] x[6]^3) + (25 x[1]^2 x[2]^2 x[5]^0.5` x[7])/(
x[3] x[6]^2) >=
100, (20 x[2] x[6])/(x[1]^2 x[4] x[5]^2) + (15 x[3] x[4])/(
x[1] x[2]^2 x[5] x[7]^0.5`) + (10 x[1] x[4]^2 x[7]^0.125`)/(
x[2] x[6]^3) + (25 x[1]^2 x[2]^2 x[5]^0.5` x[7])/(
x[3] x[6]^2) <= 3000}, {x[1], x[2], x[3], x[4], x[5], x[6],
x[7]}};
In[14]:=
FindMinimum[{f, cons}, vars] // Timing
Out[14]=
In[15]:=
NMinimize[{f, cons}, vars] // Timing
Out[15]=
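As a final note, if a good starting point for the seven-variable problem is known, it can be passed to FindMinimum explicitly; the value 5 below is just a placeholder inside the bounds, not a tuned starting point:

FindMinimum[{f, cons}, Table[{x[i], 5}, {i, 7}]] // Timing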
Hope this helps!