
Hi, I have a slightly advanced (graduate-level) math problem I hope you can help me with.

Suppose I have an n x n matrix A whose columns are the vectors a_1, a_2, ..., a_n, and these a_i's are linearly independent. However, a_1 and a_2 are ALMOST parallel, in the sense that the magnitude of (a_1,a_2) (i.e. their inner product) is greater than or equal to the product of the magnitudes of the two vectors times a factor of (1-e). I can write this as

|(a_1,a_2)| >= ||a_1||.||a_2||.(1-e).

Prove that ||A||.||A^-1|| <= 1/sqrt(e), where A^-1 is the inverse of A. What I don't understand is how to relate the eigenvalues of A to the magnitudes of a_1 and a_2. I also don't know how to find the eigenvalues of A; I could try a Schur decomposition, but I don't know the U for which U^-1AU becomes upper triangular.


||A|| = sqrt(max |lambda(A^T A)|), where lambda = eigenvalue and A^T = transpose of A.
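To make the setup concrete, here is a small numerical sketch (assuming NumPy; the particular vectors and the resulting value of e are arbitrary choices for illustration, not part of the problem) that builds a matrix whose first two columns are almost parallel, reads e off the inner-product condition, and evaluates ||A||.||A^-1|| with the definition above:

```python
import numpy as np

# Hypothetical 3x3 example: the first two columns are almost parallel.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([1.0, 1e-3, 0.0])      # nearly the same direction as a1
a3 = np.array([0.0, 0.0, 1.0])
A = np.column_stack([a1, a2, a3])

# Read e off the condition |(a1,a2)| = ||a1|| * ||a2|| * (1 - e)
e = 1.0 - abs(a1 @ a2) / (np.linalg.norm(a1) * np.linalg.norm(a2))

# ||A|| = sqrt(max |lambda(A^T A)|), and the same definition applied to A^-1
Ainv = np.linalg.inv(A)
norm_A    = np.sqrt(max(abs(np.linalg.eigvals(A.T @ A))))
norm_Ainv = np.sqrt(max(abs(np.linalg.eigvals(Ainv.T @ Ainv))))

print("e              =", e)
print("||A||*||A^-1|| =", norm_A * norm_Ainv)
print("1/sqrt(e)      =", 1.0 / np.sqrt(e))
```

Running this lets you compare the two sides of the claimed inequality for a concrete nearly-parallel case.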

2007-04-15 03:40:12 · 2 answers · asked by Mulyadi T 1 in Science & Mathematics Mathematics

2 answers

Since you haven't said otherwise, I'm going to assume that A is over the field of reals (if it has complex elements, it's another, bigger, can of worms ☺). Probably the best and easiest way to get the eigenvalues is a Schur decomposition of the matrix (computed in practice by the QR algorithm). That leaves the eigenvalues as the elements of the principal diagonal of the triangular factor, and there's no shortage of 'canned' software to do the job.
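For instance (a minimal sketch assuming NumPy and SciPy are the 'canned' software at hand; the 2x2 matrix is just a placeholder), the Schur factorization exposes the eigenvalues on the diagonal of the triangular factor:

```python
import numpy as np
from scipy.linalg import schur

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])   # placeholder matrix

T, U = schur(A)              # A = U T U^T with T (quasi-)triangular
print(np.diag(T))            # eigenvalues of A on the diagonal of T
print(np.linalg.eigvals(A))  # same values, computed directly
```

For a real matrix with complex eigenvalues the real Schur form has 2x2 blocks on the diagonal, which is why the direct eigvals call is shown alongside as a check.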

Another complication is that the eigenvalues need not all be distinct (that is, an eigenvector does not always correspond to a unique eigenvalue).

There are some very good words on the subject in 'Numerical Recipes' (Press et al., Cambridge University Press, 1986) and an excellent (if rather deep) analysis in 'Elementary Matrices' (Frazer et al., Cambridge University Press, 1938).

It's a fairly deep subject (and one that I haven't taught for a *very* long time ☺) so good luck.

HTH

Doug

2007-04-15 04:06:24 · answer #1 · answered by doug_donaghue 7 · 0 0

While there are fine programs that decompose matrices (and writing such a program isn't hard if you have programming skills), the question asks for a proof: there isn't a single matrix to decompose. The claim is that every matrix whose first two column vectors lie within a small angle of each other has ||A||*||A^-1|| bounded by 1/sqrt(e).

My suggestion (details of which I am still working on) is to start with examples: e.g., can you prove it for e=1, then for e=1/2, etc.? Also check numerically whether it holds for large matrices (see the sketch below). Try to prove it for a 2x2 matrix, then a 3x3. These are the places where I would start; see if a useful pattern emerges.
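A quick way to run that experiment (a sketch assuming NumPy; the random matrices and the way e is read off the first two columns are my own choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def check(n):
    """Draw a random n x n matrix, read e off its first two columns,
    and compare ||A||*||A^-1|| with 1/sqrt(e)."""
    A = rng.standard_normal((n, n))
    a1, a2 = A[:, 0], A[:, 1]
    e = 1.0 - abs(a1 @ a2) / (np.linalg.norm(a1) * np.linalg.norm(a2))
    cond = np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2)
    return e, cond, 1.0 / np.sqrt(e)

for n in (2, 3, 10):
    print(n, check(n))
```

Here np.linalg.norm(A, 2) is the spectral norm (largest singular value), which matches the sqrt-of-largest-eigenvalue definition in the question.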

I will post again if it starts working for me.......

Say A = diag(1,2), so A^T = A and A^T*A = diag(1,4), giving ||A|| = sqrt(4) = 2. Then A^-1 = diag(1,1/2), and (A^-1)^T*A^-1 = diag(1,1/4), so ||A^-1|| = sqrt(1) = 1, and ||A||*||A^-1|| = 2. The dot product of the columns is 0, so (1-e) <= 0, i.e. e >= 1, and 1/sqrt(e) <= 1 < 2. So the inequality as stated is not true for every epsilon.
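For what it's worth, a short check of that example (a sketch assuming NumPy), keeping the square root from the norm definition in the question:

```python
import numpy as np

A = np.diag([1.0, 2.0])
Ainv = np.linalg.inv(A)                                        # diag(1, 1/2)

norm_A    = np.sqrt(max(np.linalg.eigvals(A.T @ A).real))      # sqrt(4) = 2
norm_Ainv = np.sqrt(max(np.linalg.eigvals(Ainv.T @ Ainv).real))  # sqrt(1) = 1

print(norm_A * norm_Ainv)   # 2.0, which exceeds 1/sqrt(e) <= 1 when e >= 1
```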

Did you have a bound on epsilon?

2007-04-16 05:59:02 · answer #2 · answered by a_math_guy 5 · 0 0
