As previous respondents have mentioned, the concept of a dot product only exists within a given vector space, and many different dot products can be defined on the same vector space. For instance, consider the polynomials of degree at most 3. These form a vector space with basis {1, x, x^2, x^3}: every element is a linear combination of these monomials, i.e., a polynomial of the form ax^3+bx^2+cx+d. A reasonable dot product to impose here resembles the one you first encounter: dot (a1x^3+b1x^2+c1x+d1) with (a2x^3+b2x^2+c2x+d2) by multiplying corresponding coefficients and adding, giving a1a2+b1b2+c1c2+d1d2. Under this dot product, the dot product of x and x^3 is 0, so they are indeed orthogonal.
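A minimal sketch of this coefficient-wise dot product (the tuple encoding and function name are my own illustration):

```python
# Represent a*x^3 + b*x^2 + c*x + d as the coefficient tuple (a, b, c, d).

def coeff_dot(p, q):
    """Coefficient-wise dot product of two polynomials."""
    return sum(pi * qi for pi, qi in zip(p, q))

x       = (0, 0, 1, 0)  # the polynomial x
x_cubed = (1, 0, 0, 0)  # the polynomial x^3

print(coeff_dot(x, x_cubed))  # 0: orthogonal under this dot product
```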
However, as previous respondents have noted, when the vector spaces in question consist of FUNCTIONS, dot products are very frequently defined by integrating products of these functions. One example of polynomials that form a complete orthogonal system with respect to such a dot product (integration over [-1, 1]) is the Legendre polynomials; rescaled to unit length, they give a complete orthonormal system.
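As an illustration of the integral dot product, here is a sketch that checks, in exact arithmetic, that the first three Legendre polynomials are pairwise orthogonal over [-1, 1] (the coefficient-list encoding and helper names are my own):

```python
from fractions import Fraction

# Polynomials are coefficient lists [c0, c1, c2, ...] for c0 + c1*x + ...
P = [
    [Fraction(1)],                                   # P0 = 1
    [Fraction(0), Fraction(1)],                      # P1 = x
    [Fraction(-1, 2), Fraction(0), Fraction(3, 2)],  # P2 = (3x^2 - 1)/2
]

def integral_sym(coeffs):
    """Integral over [-1, 1]: odd powers vanish; x^n gives 2/(n+1) for even n."""
    return sum(2 * c / (n + 1) for n, c in enumerate(coeffs) if n % 2 == 0)

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def inner(p, q):
    """<p, q> = integral of p*q over [-1, 1]."""
    return integral_sym(poly_mul(p, q))

for i in range(3):
    for j in range(i):
        print(i, j, inner(P[i], P[j]))  # every distinct pair gives 0
```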
Finally, to answer your question about what it means to be complete: there are, unfortunately, at least two different meanings in use. One meaning is that if you have a collection of linearly independent vectors {v_i} in some vector space with a dot product defined, and no nonzero vector is orthogonal to all of the v_i simultaneously, then such a set is sometimes referred to as complete. However, when working in a space with a dot product and some concept of distance between vectors, a set of vectors {v_i} is sometimes considered complete if, given any vector in the space, you can find linear combinations Σ c_i v_i that get arbitrarily close to it. In other words, you can approximate vectors in the space arbitrarily closely with vectors from the subspace generated by the v_i's. If your system of vectors is "complete" in this latter sense, then it automatically satisfies the former definition as well.
In any case, dot products are never "infinite": they are almost always taken to have values in the real numbers or the complex numbers, and infinity is not an element of either set.
2006-07-13 15:08:29 · answer #1 · answered by mathbear77 2 · 4⤊ 0⤋
You have to work in a vector space to deal with dot products. Is your vector space R^2? You are not using a standard vector space such as R^2 here. A vector space V over a field F is a set equipped with addition and scalar multiplication.
The set of all polynomials of degree ≤n (actually, you don't need the degree statement) with coefficients from F forms a vector space. Take F as R and n=2, then you are looking at the polynomials with real coefficients and degree ≤ 2. It is easy to see that {1, x, x^2} forms a basis (since any polynomial of degree ≤ 2 can be formed by linear combinations of these polynomials).
Use the given definition of the dot product, <f,g>=∫f(x)g(x)w(x)dx over (a,b), with (a,b)=(0,1) and w(x)=1:
<1,x>=∫₀¹ x dx = 1/2 ≠ 0. Therefore 1 and x are not orthogonal.
Likewise, none of these is orthogonal to any other (so this is not an orthogonal basis). But we can use the Gram-Schmidt process to build an orthogonal basis (scaling along the way yields an orthonormal basis):
v(1)=1, ||v(1)||=1
v(2)'=x-<x,v(1)>v(1) = x-1/2 (from above); ||v(2)'||^2=∫₀¹(x^2-x+1/4)dx=1/3-1/2+1/4=1/12, thus define v(2)=(2√3)v(2)'; then ||v(2)||=1
v(3)'=x^2-<x^2,v(1)>v(1)-<x^2,v(2)>v(2) = x^2-1/3-(x-1/2) = x^2-x+1/6
||v(3)'||^2=∫₀¹(x^2-x+1/6)^2dx=1/180
let v(3)=(6√5)v(3)'.
Thus {v(1),v(2),v(3)} forms an orthonormal basis (if I didn't make any mistakes) for the vector space of polynomials of degree ≤ 2 over R.
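The Gram-Schmidt computation above can be checked mechanically. A sketch in exact rational arithmetic (helper names are my own; square roots only enter at the final normalization, so the check stops at the unnormalized vectors and their squared norms):

```python
from fractions import Fraction

# Polynomials are coefficient lists [c0, c1, c2, ...] for c0 + c1*x + ...

def poly_mul(p, q):
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def inner(p, q):
    """<p, q> = integral of p*q over [0, 1]; x^n integrates to 1/(n+1)."""
    return sum(c / (n + 1) for n, c in enumerate(poly_mul(p, q)))

def subtract_proj(alpha, u, v):
    """Return v - alpha*u, padding to a common length."""
    n = max(len(u), len(v))
    u = u + [Fraction(0)] * (n - len(u))
    v = v + [Fraction(0)] * (n - len(v))
    return [vi - alpha * ui for ui, vi in zip(u, v)]

basis = [[Fraction(1)],                         # 1
         [Fraction(0), Fraction(1)],            # x
         [Fraction(0), Fraction(0), Fraction(1)]]  # x^2

ortho = []
for v in basis:
    for u in ortho:
        v = subtract_proj(inner(v, u) / inner(u, u), u, v)
    ortho.append(v)

# Expect x - 1/2 with squared norm 1/12, and x^2 - x + 1/6 with 1/180.
for u in ortho:
    print(u, inner(u, u))
```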
This can be done for any n (including n≥3, which is required for your question). I hope that I have explained a) how you can have a basis that is formed by polynomials, and b) how you can have an orthonormal basis (which of course had nothing to do with your question).
The reason an integral appears in the definition of the dot product is that a definite integral is an operator taking a polynomial (a vector) to an element of its coefficient field (a scalar), and a dot product must take two vectors to a scalar.
2006-07-13 14:26:16 · answer #2 · answered by Eulercrosser 4 · 0⤊ 0⤋
The question of how polynomials can form a complete basis set has not been addressed yet. They do not form an ordinary vector space basis, but in certain function spaces they do form a basis in a sense where infinite expansions are allowed. So if square-integrable functions on [0,1] are being investigated, an orthogonal set of polynomials will be a basis in the sense that every square-integrable function can be written as an infinite series of these polynomials that converges in the mean-square sense.
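A sketch of what mean-square convergence looks like in practice: expand f(x) = |x| in Legendre polynomials on [-1, 1] and watch the squared L² error of the partial sums shrink as the degree grows (the quadrature scheme and names are my own illustration):

```python
def legendre_vals(n_max, x):
    """Values P_0(x) .. P_{n_max}(x) via the three-term recurrence."""
    vals = [1.0, x]
    for n in range(1, n_max):
        vals.append(((2 * n + 1) * x * vals[n] - n * vals[n - 1]) / (n + 1))
    return vals[: n_max + 1]

N = 4000  # midpoint-rule panels on [-1, 1]
xs = [-1.0 + (2 * k + 1) / N for k in range(N)]
w = 2.0 / N

f = abs  # the target function f(x) = |x|
n_max = 6

# Fourier-Legendre coefficients: c_n = (2n+1)/2 * integral f*P_n over [-1,1]
coeffs = []
for n in range(n_max + 1):
    integral = sum(f(x) * legendre_vals(n_max, x)[n] for x in xs) * w
    coeffs.append((2 * n + 1) / 2 * integral)

def partial_sum(x, degree):
    vals = legendre_vals(n_max, x)
    return sum(coeffs[n] * vals[n] for n in range(degree + 1))

errs = []
for d in (0, 2, 4, 6):
    err2 = sum((f(x) - partial_sum(x, d)) ** 2 for x in xs) * w
    errs.append(err2)
    print(d, err2)  # squared L^2 error shrinks as the degree grows
```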
2006-07-13 16:10:37 · answer #3 · answered by mathematician 7 · 0⤊ 0⤋
For functions, the idea of orthogonality is extended and the dot product becomes an integral. Take a look at the first page of this link, it will explain it. In order for a set of functions to be a basis set, it must span all of function space. This idea may be familiar to you if you have studied Fourier series, which can be used to represent a very wide class of functions (essentially any reasonably well-behaved periodic function). For further reference, see "Fourier Series" by Tolstov. This is an excellent book: it covers more than Fourier series, also discussing power series and other important series, and it has MANY worked-out examples and answers to the kinds of problems a professor is likely to try to trick you with!
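A small illustration in the spirit of the Fourier-series remark: the sine series of f(x) = x on (-π, π) converges to f in the mean-square sense, and the error of the partial sums shrinks as more terms are kept (the quadrature and names are my own sketch):

```python
from math import sin, pi

# Fourier series of f(x) = x on (-pi, pi):
#   x = 2 * sum_{n>=1} (-1)^(n+1) * sin(n*x) / n

def partial_sum(x, terms):
    return 2 * sum((-1) ** (n + 1) * sin(n * x) / n
                   for n in range(1, terms + 1))

N = 2000  # midpoint-rule panels on (-pi, pi)
xs = [-pi + (2 * k + 1) * pi / N for k in range(N)]
w = 2 * pi / N

errs = []
for terms in (1, 5, 25, 125):
    err2 = sum((x - partial_sum(x, terms)) ** 2 for x in xs) * w
    errs.append(err2)
    print(terms, err2)  # squared L^2 error shrinks with more terms
```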
2006-07-13 15:09:03 · answer #4 · answered by 1,1,2,3,3,4, 5,5,6,6,6, 8,8,8,10 6 · 0⤊ 0⤋
In order to perform the dot product you need to have a space on which to perform it. What is the range you are integrating over? This will need to be specified before you can determine orthogonality.
However, x and x^3 will never be orthogonal under the unweighted inner product, no matter what the range: their product is x^4 ≥ 0, so the integral is strictly positive over any interval of positive length.
Orthogonal polynomials are normally defined on a finite interval like (0, 1). The Chebyshev polynomials, orthogonal on (-1, 1) with respect to the weight 1/√(1-x²), are one such family of orthogonal polynomials.
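A quick check of the claim that x and x^3 are never orthogonal on any interval under the unweighted inner product: their product x^4 integrates to (b^5 - a^5)/5, which is positive whenever a < b (function name is my own):

```python
def inner_x_x3(a, b):
    """<x, x^3> = integral of x^4 from a to b, via the exact antiderivative."""
    return (b ** 5 - a ** 5) / 5

for a, b in [(0, 1), (-1, 1), (-3, 2)]:
    print((a, b), inner_x_x3(a, b))  # always > 0 when a < b
```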
2006-07-13 14:43:18 · answer #5 · answered by Paul C 4 · 0⤊ 0⤋
You want to be slightly careful: there is more than one product defined on vectors, and here you want the dot product. Just write it out in long form: [x1, x2, x3] · [-2, 6, -1] = -2x1 + 6x2 - x3 = 0. Three unknowns in one equation means there are multiple solutions (a whole plane of them). We only need one, so let x1 = 1, x2 = 0 and solve for x3.
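A sketch of the computation just described (names are my own):

```python
def dot(u, v):
    """Standard dot product of two equal-length vectors."""
    return sum(a * b for a, b in zip(u, v))

v = [-2, 6, -1]
x1, x2 = 1, 0
x3 = -2 * x1 + 6 * x2  # solve -2*x1 + 6*x2 - x3 = 0 for x3
u = [x1, x2, x3]

print(u, dot(u, v))  # [1, 0, -2] is orthogonal to v: dot product 0
```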
2016-11-06 08:38:23 · answer #6 · answered by tine 4 · 0⤊ 0⤋
I had only heard of orthogonality for vectors and matrices, not functions!
2006-07-13 14:27:04 · answer #7 · answered by ___ 4 · 0⤊ 1⤋
NO.
2006-07-13 16:58:12 · answer #8 · answered by DoctaB01 2 · 0⤊ 1⤋