
Find the second-degree Taylor polynomial for f' about 4 and use it to approximate f'(4.3).

f(x) = 7 - 3(x-4) + 5(x-4)^2 - 2(x-4)^3 + 6(x-4)^4

2007-05-10 15:10:08 · 5 answers · asked by Anonymous in Science & Mathematics Mathematics

5 answers

f(x) = 7 - 3(x-4) + 5(x-4)^2 - 2(x-4)^3 + 6(x-4)^4
so f'(x) = -3 + 10(x-4) - 6(x-4)^2 + 24(x-4)^3
and we want to find the second-order Taylor polynomial for this about x=4.
But since it is already a polynomial in the variable (x-4), we just have to chop off the cubic term:
T(x) = -3 + 10(x-4) - 6(x-4)^2
So T(4.3) = -3 + 10(0.3) - 6(0.09) = -0.54.
In this case it's not a good approximation, because the next (and last) term of f' evaluates to 24(0.3)^3 = 0.648, so the exact value is f'(4.3) = -0.54 + 0.648 = 0.108.
Still, the requested approximation is f'(4.3) ≈ -0.54.
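As a quick sanity check (a Python sketch of my own, not part of the original answer; the function names are mine):

```python
# f'(x) for the given f, already expressed in powers of (x - 4)
def f_prime(x):
    u = x - 4
    return -3 + 10*u - 6*u**2 + 24*u**3

# Second-degree Taylor polynomial of f' about 4 (cubic term dropped)
def taylor2(x):
    u = x - 4
    return -3 + 10*u - 6*u**2

print(taylor2(4.3))   # ≈ -0.54, the approximation
print(f_prime(4.3))   # ≈ 0.108, the exact value
```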

2007-05-10 15:21:10 · answer #1 · answered by Scarlet Manuka 7 · 2 0

NMinimize, NMaximize, Minimize and Maximize employ global optimization algorithms, and are thus suitable when a global optimum is needed.
Minimize and Maximize can find exact global optima for a class of optimization problems containing arbitrary polynomial problems. However, the algorithms used have a very high asymptotic complexity and therefore are suitable only for problems with a small number of variables.
Maximize always finds a global maximum, even in cases that are numerically unstable. The left-hand side of the constraint here is (x^2 + y^2 - 10^10)^2 (x^2 + y^2).
In[1]:=

Maximize[{x + y,
  100000000000000000000 x^2 - 20000000000 x^4 + x^6 +
    100000000000000000000 y^2 - 40000000000 x^2 y^2 + 3 x^4 y^2 -
    20000000000 y^4 + 3 x^2 y^4 + y^6 <= 1}, {x, y}] // N[#, 20] &

Out[1]=
This input differs from the previous one only in the twenty-first decimal digit, but the answer is quite different, especially the location of the maximum point. To an algorithm using 16 digits of precision, the two problems look identical, so it cannot possibly solve both correctly.
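The precision point can be illustrated outside Mathematica as well. A tiny Python sketch (my own, not from the documentation) of why the two leading coefficients are indistinguishable at 16 digits:

```python
a = 10**20      # leading coefficient in the first problem
b = 10**20 + 1  # leading coefficient in the second problem

print(a == b)                # False: distinct as exact integers
print(float(a) == float(b))  # True: identical as 53-bit (~16-digit) doubles
```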
In[2]:=

Maximize[{x + y,
  100000000000000000001 x^2 - 20000000000 x^4 + x^6 +
    100000000000000000000 y^2 - 40000000000 x^2 y^2 + 3 x^4 y^2 -
    20000000000 y^4 + 3 x^2 y^4 + y^6 <= 1}, {x, y}] // N[#, 20] &

Out[2]=
NMaximize, which by default uses machine-precision numbers, is not able to solve either of the problems.
In[3]:=

NMaximize[{x + y,
  100000000000000000000 x^2 - 20000000000 x^4 + x^6 +
    100000000000000000000 y^2 - 40000000000 x^2 y^2 + 3 x^4 y^2 -
    20000000000 y^4 + 3 x^2 y^4 + y^6 <= 1}, {x, y}]

Out[3]=
In[4]:=

NMaximize[{x + y,
  100000000000000000001 x^2 - 20000000000 x^4 + x^6 +
    100000000000000000000 y^2 - 40000000000 x^2 y^2 + 3 x^4 y^2 -
    20000000000 y^4 + 3 x^2 y^4 + y^6 <= 1}, {x, y}]

Out[4]=
FindMinimum only attempts to find a local minimum, and is therefore suitable when a local optimum is needed, or when it is known in advance that the problem has only one optimum, or only a few optima that can be discovered from different starting points.
Even for local optimization, it may still be worth using NMinimize for small problems. NMinimize uses one of four direct search algorithms (Nelder-Mead, differential evolution, simulated annealing, and random search), then fine-tunes the solution using a combination of KKT conditions, interior point, and penalty methods. So if efficiency is not an issue, NMinimize should be more robust than FindMinimum, in addition to being a global optimizer.
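A rough cross-language analogue of this local-versus-global distinction, sketched in Python with SciPy (my own illustration, not from the original text): `minimize` plays roughly the role of FindMinimum, and `differential_evolution` that of NMinimize.

```python
from scipy.optimize import minimize, differential_evolution

# A one-dimensional multimodal objective with two local minima
def f(v):
    x = v[0]
    return x**4 - 3*x**2 + x

local_res = minimize(f, x0=[1.0])                          # local search from x = 1
global_res = differential_evolution(f, [(-3, 3)], seed=0)  # global search on [-3, 3]

print(local_res.fun)   # ≈ -1.07, the local minimum near x ≈ 1.13
print(global_res.fun)  # ≈ -3.51, the global minimum near x ≈ -1.30
```

The local search, started at x = 1, descends into the nearer, shallower basin; the global search finds the deeper one.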
This shows the default behavior of NMinimize on a problem with four variables.
In[5]:=

Clear[f, x, y, z, t];
f = -Log[x] - 2 Log[y] - 3 Log[z] - 3 Log[t];
cons = {200 x^2 + y^2 + z^2 + t^2 == 100, x > 0, y > 0, z > 0, t > 0};
vars = {x, y, z, t};
sol = NMinimize[{f, cons}, vars]

Out[9]=
This shows that two of the post-processors, KKT and FindMinimum, do not give the default result. Note that, for historical reasons, the name FindMinimum, when used as an option value of PostProcess, stands for the process in which a penalty method converts the constrained optimization problem into an unconstrained one, which is then solved using (unconstrained) FindMinimum.
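The penalty idea can be sketched generically (a toy Python sketch under my own setup, not NMinimize's actual post-processor): replace "minimize f subject to g = 0" with unconstrained minimization of f + mu*g^2, warm-starting as mu grows.

```python
from scipy.optimize import minimize

def f(v):   # objective: x + y
    return v[0] + v[1]

def g(v):   # equality constraint: x^2 + y^2 - 1 = 0
    return v[0]**2 + v[1]**2 - 1

x0 = [0.0, 0.0]
for mu in [1, 10, 100, 1000]:   # increasing penalty weight
    res = minimize(lambda v: f(v) + mu * g(v)**2, x0)
    x0 = res.x                   # warm-start the next, stiffer problem

# x0 now approximates the constrained minimizer (-1/sqrt(2), -1/sqrt(2))
print(x0)
```

Each pass tightens constraint satisfaction while the warm start keeps the unconstrained solver near the previous solution.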
In[10]:=

sol = NMinimize[{f, cons}, vars, Method -> {NelderMead, PostProcess -> KKT}]

Out[10]=
In[11]:=

sol = NMinimize[{f, cons}, vars, Method -> {NelderMead, PostProcess -> FindMinimum}]

Out[11]=
However, if efficiency is important, FindMinimum is preferable when you only need a local minimum, when you can provide a good starting point, when you know the problem has only one minimum (e.g., it is convex), or when the problem is large or expensive. This uses FindMinimum and NMinimize to solve the same seven-variable problem, whose constraints are relatively expensive to compute. Clearly FindMinimum is much faster than NMinimize in this case.
In[12]:=

Clear[f, cons, vars, x];
{f, cons, vars} = {
  (20 x[2] x[6])/(x[1]^2 x[4] x[5]^2) +
    (15 x[3] x[4])/(x[1] x[2]^2 x[5] x[7]^0.5`) +
    (10 x[1] x[4]^2 x[7]^0.125`)/(x[2] x[6]^3) +
    (25 x[1]^2 x[2]^2 x[5]^0.5` x[7])/(x[3] x[6]^2),
  {0.1` <= x[1] <= 10, 0.1` <= x[2] <= 10, 0.1` <= x[3] <= 10,
   0.1` <= x[4] <= 10, 0.1` <= x[5] <= 10, 0.1` <= x[6] <= 10,
   0.01` <= x[7] <= 10,
   1 - (0.2` x[3] x[6]^(2/3) x[7]^0.25`)/(x[2] x[4]^0.5`) -
     (0.7` x[1]^3 x[2] x[6] x[7]^0.5`)/x[3]^2 -
     (0.5` x[1]^0.5` x[7])/(x[3] x[6]^2) >= 0,
   1 - (3.1` x[2]^0.5` x[6]^(1/3))/(x[1] x[4]^2 x[5]) -
     (1.3` x[2] x[6])/(x[1]^0.5` x[3] x[5]) -
     (0.8` x[3] x[6]^2)/(x[4] x[5]) >= 0,
   1 - (x[2] x[3]^0.5` x[5])/x[1] -
     (0.1` x[2] x[5])/(x[3]^0.5` x[6] x[7]^0.5`) -
     (2 x[1] x[5] x[7]^(1/3))/(x[3]^1.5` x[6]) -
     (0.65` x[3] x[5] x[7])/(x[2]^2 x[6]) >= 0,
   1 - (0.3` x[1]^0.5` x[2]^2 x[3] x[4]^(1/3) x[7]^0.25`)/x[5]^(2/3) -
     (0.2` x[2] x[5]^0.5` x[7]^(1/3))/(x[1]^2 x[4]) -
     (0.5` x[4] x[7]^0.5`)/x[3]^2 -
     (0.4` x[3] x[5] x[7]^0.75`)/(x[1]^3 x[2]^2) >= 0,
   (20 x[2] x[6])/(x[1]^2 x[4] x[5]^2) +
     (15 x[3] x[4])/(x[1] x[2]^2 x[5] x[7]^0.5`) +
     (10 x[1] x[4]^2 x[7]^0.125`)/(x[2] x[6]^3) +
     (25 x[1]^2 x[2]^2 x[5]^0.5` x[7])/(x[3] x[6]^2) >= 100,
   (20 x[2] x[6])/(x[1]^2 x[4] x[5]^2) +
     (15 x[3] x[4])/(x[1] x[2]^2 x[5] x[7]^0.5`) +
     (10 x[1] x[4]^2 x[7]^0.125`)/(x[2] x[6]^3) +
     (25 x[1]^2 x[2]^2 x[5]^0.5` x[7])/(x[3] x[6]^2) <= 3000},
  {x[1], x[2], x[3], x[4], x[5], x[6], x[7]}};
In[14]:=

FindMinimum[{f, cons}, vars] // Timing

Out[14]=
In[15]:=

NMinimize[{f, cons}, vars] // Timing

Out[15]=

Hope this helps!

2007-05-10 15:18:18 · answer #2 · answered by mk_freak90 2 · 0 3

From direct calculation,
f'(x)=-3+10(x-4)-6(x-4)^2+24(x-4)^3.

The second-degree Taylor polynomial is just this with the last term dropped.

Then f'(4.3) = -3 + 10*(.3) - 6*(.3)^2
= -3 + 3 - .54 = -.54.

2007-05-10 15:16:47 · answer #3 · answered by mathematician 7 · 0 0

lol only a few steps... you can do it yourself
f(x) = 7 - 3(x-4) + 5(x-4)^2 - 2(x-4)^3 + 6(x-4)^4
f'(x) = -3 + 10(x-4) - 6(x-4)^2 + 24(x-4)^3
f"(x) = 10 - 12(x-4) + 72(x-4)^2
f"'(x) = -12 + 144(x-4)

The second-degree Taylor polynomial of f' about 4 is
f'(x) ≈ f'(4) + f"(4)(x - 4) + f"'(4)/2! (x - 4)^2

plug in 4.3 for x when you're done to estimate... can you plug and chug?

2007-05-10 15:19:36 · answer #4 · answered by David K 2 · 0 1

Find the derivatives: f(x) = ln(x); at x=1 it is 0. f'(x) = 1/x; at x=1 this is 1. f''(x) = -1/x^2; at x=1 it is -1. f'''(x) = 2/x^3; at x=1 it is 2. f''''(x) = -6/x^4; at x=1 it is -6. The Taylor series of f(x) at x=1 is: f(1) + f'(1)(x-1) + f''(1)/2! (x-1)^2 + f'''(1)/3! (x-1)^3 + f''''(1)/4! (x-1)^4 + ... = 0 + (x-1) - (1/2)(x-1)^2 + (1/3)(x-1)^3 - (1/4)(x-1)^4 + ...

2016-10-15 08:20:16 · answer #5 · answered by Anonymous · 0 0
