Part 1:
Suppose P_n(z) = a0_n + a1_n * z + .. + am-1_n * z^(m-1),
where n = 0, 1, 2, ..., and suppose the values converge at m fixed points:
P_n(z_0) => P(z_0)
P_n(z_1) => P(z_1)
...
P_n(z_m-1) => P(z_m-1)
There is a unique polynomial of degree at most m-1 taking the fixed values at z_0, z_1, .., z_m-1 (two such polynomials would differ by a polynomial of degree at most m-1 with m roots, which must be zero by the fundamental theorem of algebra): call this P(z). Then P_n(z_k) => P(z_k) for k = 0, 1, .., m-1. But another way of putting this is:
P_n(z_j) = Sum(k = 0, m-1) [ak_n * (z_j)^k] => P(z_j)
Sum(k=0, m-1)[(z_j)^k *ak_n] => P(z_j)
If the mxm matrix Z is defined:
Z_j_k = (z_j)^k , we find
Sum(k=0, m-1)[Z_j_k * ak_n] = P_n(z_j) => P(z_j)
If we define the vectors:
A_n = (a0_n, a1_n,.. am-1_n)
P_n = (P_n(z_0), P_n(z_1),.. P_n(z_m-1))
P = (P(z_0), P(z_1), .. P(z_m-1)), we find that:
Z * A_n = P_n => P
So, here Z is a matrix with determined elements, P is a vector with determined elements, and A_n is one of a sequence of vectors.
OK, here I am going to go for broad strokes. Assume that Z is invertible (it is, whenever the z_j are distinct: Z is a Vandermonde matrix, and its determinant is the product of the differences (z_j - z_i) over j > i, which is nonzero for distinct points). Then:
A_n = Z^(-1) * P_n => Z^(-1)*P. So then, as the P_n converge to P, the A_n will converge to Z^(-1)*P. In fact, the A_n converge to exactly the vector A consisting of the coefficients of the polynomial P(z) which matches the values at the z_k.
Since the polynomials P_n are defined by their coefficients, the polynomials P_n must be said to converge to P. (Note that I am talking about convergence in the space of polynomials, not on C: In this context, C is just the supporting space for the space of polynomials, we are not really interested in the values of P_n or P, except at the z_j. What is really of interest is the coefficients.)
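The Part 1 argument can be sketched numerically. This is a minimal illustration with names of my own choosing (the target polynomial, sample points, and perturbation 1/n are all invented for the demo): build the Vandermonde matrix Z, take values P_n that converge to P's values, and solve Z * A_n = P_n to watch the coefficient vectors converge.

```python
# Sketch of Part 1 (illustrative, not from the original answer).
# Limit polynomial P(z) = 1 + 2z + 3z^2, so m = 3 and A = (1, 2, 3).
import numpy as np

m = 3
z = np.array([0.0, 1.0, 2.0])          # distinct sample points z_0, .., z_m-1
Z = np.vander(z, m, increasing=True)    # Z_j_k = (z_j)^k, a Vandermonde matrix
A = np.array([1.0, 2.0, 3.0])           # coefficients of the limit polynomial P
P_vals = Z @ A                          # the vector (P(z_0), P(z_1), P(z_2))

for n in [1, 10, 100, 1000]:
    P_n_vals = P_vals + 1.0 / n         # sample values converging to P's values
    A_n = np.linalg.solve(Z, P_n_vals)  # A_n = Z^(-1) * P_n
    print(n, A_n)
```

As n grows, the printed A_n approaches (1, 2, 3): convergence of values at the m points forces convergence of coefficients, exactly as the broad-strokes argument claims.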
Part 2: OK, as with Part 1, I'm only going to try to do this in broad strokes, because I think a full proof would require more "machinery" than I can easily remember and create.
What I think: Within the radius of convergence of the power series, the P_n(z) converge to P(z). Basically, what else can they do? Their behavior is trapped by the convergence of the power series.
However, outside the radius of convergence of the power series, all bets are off. For one thing, P(z) as defined by the series no longer exists there, but all the P_n(z) exist everywhere; moreover, as z => infinity, P_n(z) => infinity as well (polynomials are like that). As an example, consider the function:
1/(1-x) = 1 + x + x^2 + x^3 + ...
which diverges once the absolute value of x reaches 1. So then the P_n(x) have nothing to converge to, although they all exist everywhere; but for |x| < 1, they will converge to the function.
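That example is easy to check directly. A quick sketch (the helper name partial_sum is mine): compute the partial sums P_n(x) = 1 + x + .. + x^n and compare them to 1/(1-x) inside and outside the radius of convergence.

```python
# Partial sums of the geometric series for 1/(1-x).
# Inside the radius of convergence (|x| < 1) they approach 1/(1-x);
# outside it (|x| > 1) they grow without bound, though each P_n exists everywhere.
def partial_sum(x, n):
    """P_n(x) = sum of x^k for k = 0..n."""
    return sum(x**k for k in range(n + 1))

x_in = 0.5                                    # inside the radius of convergence
print(partial_sum(x_in, 50), 1 / (1 - x_in))  # nearly equal

x_out = 2.0                                   # outside the radius of convergence
for n in [5, 10, 20]:
    print(n, partial_sum(x_out, n))           # blows up as n grows
```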
I hope this provides some insight, if not formalism.
answer #1 · answered by ? 6 · 2007-10-08 12:47:51
Nealjking's answer is very good. I have nothing interesting to add. I just want to point out that, without mentioning it, he implicitly used the so-called Lagrange Interpolation Formula, which shows that his matrix Z is actually invertible.
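Steiner's point can be made concrete. This is my own sketch, not from either answer: for distinct nodes, the Lagrange formula explicitly constructs a polynomial hitting any prescribed values, so the coefficients-to-values map (the matrix Z) is onto, hence invertible.

```python
# Lagrange interpolation sketch (names and test data are my own).
# Recovering the coefficients of 1 + 2z + 3z^2 from its values shows that
# any value vector is reachable, i.e. Z is invertible for distinct nodes.
import numpy as np

def lagrange_coeffs(nodes, values):
    """Coefficients (lowest degree first) of the interpolating polynomial."""
    m = len(nodes)
    coeffs = np.zeros(m)
    for j, (zj, vj) in enumerate(zip(nodes, values)):
        # Basis polynomial L_j: equals 1 at zj and 0 at the other nodes.
        basis = np.array([1.0])
        for i, zi in enumerate(nodes):
            if i != j:
                # Multiply by (z - zi) / (zj - zi); np.polymul stores
                # coefficients highest degree first, so reverse at the end.
                basis = np.polymul(basis, np.array([1.0, -zi]) / (zj - zi))
        coeffs += vj * basis[::-1]
    return coeffs

nodes = [0.0, 1.0, 2.0]
values = [1.0, 6.0, 17.0]              # values of 1 + 2z + 3z^2 at the nodes
print(lagrange_coeffs(nodes, values))  # recovers the coefficients 1, 2, 3
```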
As for part 2, well, it's not a trivial problem. I'll think more about it.
answer #2 · answered by Steiner 7 · 2007-10-09 02:09:51
A proof for this type of sequence of commuting polynomials is sketched on page 436 of "Mathematical Omnibus: Thirty Lectures on Classic Mathematics" by D. B. Fuks and Serge Tabachnikov. You can find it by searching Google Books for the terms "omnibus" and "sequence of commuting polynomials."
answer #3 · answered by gayman 4 · 2016-10-20 05:15:19