
6 answers

No.

The family of functions:

k e^x

for any number k all have this property. However, those are the only such functions.

You see, the question you're asking is a differential equation:

y' = y

In differential equations, you learn that a first-order linear homogeneous equation like this one (meaning it involves only the function and its first derivative) has one "base" solution. Any constant multiple of that solution also works, but those are the only solutions.
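This can be checked numerically (a minimal sketch using a central-difference approximation; the helper `deriv` and the sample values of k and x are just illustrative choices):

```python
import math

def deriv(f, x, h=1e-6):
    # Central-difference approximation of f'(x).
    return (f(x + h) - f(x - h)) / (2 * h)

for k in (-3.0, 0.5, 2.0):
    f = lambda x, k=k: k * math.exp(x)
    for x in (-1.0, 0.0, 2.0):
        # For y = k e^x, the numerical derivative matches the function.
        assert abs(deriv(f, x) - f(x)) < 1e-5 * max(1.0, abs(f(x)))
```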

2007-09-13 09:05:15 · answer #1 · answered by сhееsеr1 7 · 1 0

Actually, f(x) = e^x is the only function that satisfies

f'(x) = f(x) for every real (and even complex) x, and
f(0) = 1.

To see this, let's consider the real case; the complex one is a bit more complicated. Suppose such an f exists (we haven't proved it exists yet). Then, by a simple induction, we see it has derivatives of all orders on R and that

d^n f/dx^n (x) = f(x) for all n. Therefore, all such derivatives take on the value f(0) = 1 at x = 0.

Taylor's theorem is, therefore, applicable, and we can expand f(x) around 0. So, for every n >= 1 and every real x ≠ 0,

f(x) = f(0) + x f'(0) + x^2/2! f''(0) + ... + x^n/n! f_n(0) + R(n, x).

Here, f_n means the derivative of order n and R(n, x) is the Lagrange Remainder, that depends on n and x. In virtue of what we've seen, it follows that

f(x) = 1 + x + x^2/2! ....+ x^n/n! + R(n,x)

We know R(n, x) = x^(n+1)/(n+1)! f_(n+1)(a) = x^(n+1)/(n+1)! f(a), where a is a number between 0 and x, which depends on n and x. Since f is continuous on R (because it's differentiable), f is bounded on the compact interval with endpoints 0 and x. So, for every n there exists M, depending only on x, such that |f(u)| < M for every u in the mentioned interval. It follows that

|R(n, x)| < M |x^(n+1)/(n+1)!| (1)

for every n. According to a known result, easy to prove, for fixed x, x^(n+1)/(n+1)! --> 0 as n --> oo. In virtue of (1), it then follows that lim (n --> oo) R(n, x) = 0 for every x. Since the Lagrange remainder goes to 0, f can be expanded in an infinite Taylor series, and we conclude that, if our f exists, then it must be given by
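That limit is easy to see numerically: once n + 1 exceeds |x|, each successive step multiplies the bound by a factor smaller than 1 (a small sketch; the helper name `term` and the sample values are arbitrary):

```python
import math

def term(x, n):
    # |x|^(n+1) / (n+1)!  -- the n-dependent factor in the bound (1).
    return abs(x) ** (n + 1) / math.factorial(n + 1)

x = 10.0
# The factor may grow at first (while n + 1 < |x|) but then collapses:
# each step multiplies it by |x| / (n + 2) < 1.
assert term(x, 20) < term(x, 5)
assert term(x, 100) < term(x, 20)
assert term(x, 100) < 1e-50
```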

f(x) = 1 + x + x^2/2! + x^3/3! + ... No other definition of f can satisfy our conditions.

Now, it remains to prove this f really exists. Let g(x) = 1 + x + x^2/2! + ... + x^n/n! + ... for x in R. Then, g is given by a power series, and the ratio test shows this series converges for every real x. So, g is well defined.
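The convergence is easy to observe: partial sums of this series quickly match the built-in exponential (a sketch; `exp_series` and the term count of 40 are illustrative choices):

```python
import math

def exp_series(x, n_terms=40):
    # Partial sum 1 + x + x^2/2! + ... up to n_terms terms.
    total, term = 0.0, 1.0
    for k in range(n_terms):
        total += term
        term *= x / (k + 1)   # turns x^k/k! into x^(k+1)/(k+1)!
    return total

for x in (-2.0, 0.0, 1.0, 5.0):
    assert abs(exp_series(x) - math.exp(x)) < 1e-9 * max(1.0, math.exp(x))
```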

Now, let's check that g satisfies g'(x) = g(x) for every real x and g(0) = 1. According to the properties of power series, g is differentiable and its derivative is given by the series obtained by differentiating each term of g. So

g'(x) = 1 + x + x^2/2! + x^3/3! + ..., that is, we get the same series; therefore g'(x) = g(x) for every real x. And it's immediate that g(0) = 1.

So, we have f = g. And we've just proved there's one and only one real valued f satisfying

f'(x) = f(x) for every real x, and
f(0) = 1.

No other function satisfies both conditions.

As you know, 1 + 1 + 1/2! + 1/3! + ... is the famous Euler number e, and f is usually written as e^x. We can prove that, on the rationals, this definition of the exponential function agrees with the definition e^(m/n) = (e^m)^(1/n).

If you multiply f by a constant k, the function h(x) = k f(x) also satisfies h'(x) = h(x) for every real x. But h(0) = k.

Such conclusions are also valid for complex numbers.

Hope this helps.

2007-09-13 16:55:31 · answer #2 · answered by Steiner 7 · 0 1

This could be due to the uniqueness of e. If you differentiate a^x numerically from first principles, with a = 1, 2, 3, ... and h = 0.1, 0.01, 0.001, ..., you'll see that between 2 and 3 the derivative changes from being smaller than the function to being larger. This means that somewhere along the way the derivative equals the function. This can only happen once, since the ratio of the derivative to the function increases steadily with the base a. This number was named e, probably after Euler.
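The experiment described above can be sketched in a few lines; since (a^(x+h) - a^x)/h = a^x · (a^h - 1)/h, the ratio of the numerical derivative to the function does not depend on x (the helper `slope_ratio` and the bisection setup are illustrative choices):

```python
def slope_ratio(a, h=1e-6):
    # (numerical derivative of a^x) / a^x = (a^h - 1) / h, for any x.
    return (a ** h - 1) / h

# At a = 2 the derivative is smaller than the function; at a = 3, larger.
assert slope_ratio(2) < 1 < slope_ratio(3)

# Bisect between 2 and 3 for the base where the ratio crosses 1.
lo, hi = 2.0, 3.0
for _ in range(50):
    mid = (lo + hi) / 2
    if slope_ratio(mid) < 1:
        lo = mid
    else:
        hi = mid
# The crossing point is e = 2.71828...
assert abs(lo - 2.718281828) < 1e-3
```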

2007-09-13 16:22:53 · answer #3 · answered by marcus101 2 · 0 2

Well, obviously, it isn't, because 10e^x has 10e^x as its derivative, and f(x) = 0 is its own derivative.

But if f(x) = f'(x), then f(x) = Ce^x for some constant C (specifically, C = f(0)).

One way to prove this is to define g(x) = f'(x)/f(x). Then g(x) is the derivative of ln(f(x)). Since f'(x) = f(x), we have g(x) = 1, so the derivative of ln(f(x)) is 1, and ln(f(x)) must be x + c for some constant c; that is, f(x) = e^(x+c) = e^c · e^x = C·e^x.

But that only works if we know f(x)>0 for all x.
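For f(x) = C e^x with C > 0, ln f(x) = ln C + x, so its derivative really is the constant 1; a quick numerical check (a sketch; the constant C and the sample points are arbitrary):

```python
import math

C = 7.5                        # any positive constant
f = lambda x: C * math.exp(x)

h = 1e-6
for x in (-1.0, 0.0, 3.0):
    # Central-difference derivative of ln(f(x)); it should be 1 everywhere.
    dlnf = (math.log(f(x + h)) - math.log(f(x - h))) / (2 * h)
    assert abs(dlnf - 1.0) < 1e-6
```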

2007-09-13 16:11:59 · answer #4 · answered by thomasoa 5 · 0 0

It was designed to have that property. Have you heard of power series representations? They are infinite sums of powers of x that equal other functions.

For e^x, the power series is 1 + x + (x^2)/2! + (x^3)/3! + ... You should be able to see that, since this series continues indefinitely, its derivative is the same as itself.

2007-09-13 16:07:35 · answer #5 · answered by Anonymous · 0 2

Because f(x) = ke^x is the only solution to the equation
f'(x) = f(x).

2007-09-13 16:12:11 · answer #6 · answered by Alberd 4 · 0 2
