I'm a math major, and a professor in a philosophy class I took recently discussed this as an example of a definition or assumption that mathematicians use. My question is: since this is treated as true in lower-level math courses (I've only taken up to linear algebra), is it still a definition, or has a proof been offered for it?
(Again, this question is aimed mostly at a graduate-level audience. Please don't respond with some stupid crap about how 1+1 can't equal 3, therefore it's true, or anything like that.)
2006-10-28 16:07:12 · 11 answers · asked by topher8128 2 · in Science & Mathematics ➔ Mathematics
[[Added]] Thanks for your answers. I had lost some faith in Yahoo Answers and am quite satisfied with the responses, although someone deemed it appropriate to give me a thumbs down :).
2006-10-28 16:21:03 · update #1
Well, yes, sort of. The proof that 1+1=2 works by first formulating arithmetic in the language of set theory: natural numbers are defined as certain types of sets, and addition of natural numbers is defined in terms of those sets, whereupon it can be shown from the axioms of set theory that applying the addition operation to the sets for "1" and "1" does indeed yield the set for "2."
Note that I say it's only sort of proven. The reason I say that is that if you don't already believe that 1+1=2, you're never going to accept the (re)definition of natural numbers in terms of sets, so the whole thing is really just a gigantic exercise in begging the question, not unlike using the fact that sin (x+y) = sin x cos y + sin y cos x to prove that sin x = cos (π/2-x).
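To make the flavor of such a proof concrete, here is a minimal sketch in Lean (my own illustration, not anything from this thread; the names Peano, add, one, and two are mine): naturals are built from scratch in the Peano style, addition is defined by recursion, and 1 + 1 = 2 then holds by pure computation.

```lean
-- A minimal sketch (assuming Lean 4): Peano-style naturals defined from scratch,
-- so that "1 + 1 = 2" becomes a provable statement rather than an assumption.
inductive Peano where
  | zero : Peano
  | succ : Peano → Peano

open Peano

-- Addition defined by recursion on the second argument.
def add : Peano → Peano → Peano
  | n, zero   => n
  | n, succ m => succ (add n m)

def one : Peano := succ zero
def two : Peano := succ (succ zero)

-- With these definitions the equation follows by unfolding them (rfl).
theorem one_plus_one : add one one = two := rfl
```

Of course, this illustrates the point above rather than refuting it: the proof is only as convincing as the definitions of add, one, and two, which were chosen precisely so that it would go through.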
2006-10-28 16:17:14 · answer #1 · answered by Pascal 7 · 1⤊ 0⤋
I got a math degree many years ago, and never really had a course where this was addressed directly.
But I came to view the truth of 1+1=2 as deriving from the definition of counting and the definition of addition. As in all mathematical proofs, you have to start with definitions and assumptions (or axioms, or postulates, or whatever you choose to call them). And then you see what you can prove from them.
2006-10-28 16:15:46 · answer #2 · answered by actuator 5 · 0⤊ 0⤋
The algebra in the inductive step can be simplified if we note that (1 + 2 + ... + n)^2 = [n(n + 1)/2]^2. Now if you assume that 1^3 + 2^3 + ... + n^3 = [n(n + 1)/2]^2, then
1^3 + 2^3 + ... + n^3 + (n + 1)^3 = [n(n + 1)/2]^2 + (n + 1)^3 = n^2(n + 1)^2/4 + (n + 1)^3 = (n + 1)^2(n^2/4 + n + 1) = (n + 1)^2[(n^2 + 4n + 4)/4] = [(n + 1)(n + 2)/2]^2.
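As a quick sanity check on the closed form used in that inductive argument (a sketch only, not a proof; the helper name sumCubes is mine), one can compare the sum of the first n cubes against [n(n + 1)/2]^2 for small n:

```lean
-- Sanity check (not a proof): sum of the first n cubes vs. the closed form [n(n+1)/2]^2.
def sumCubes (n : Nat) : Nat :=
  (List.range (n + 1)).foldl (fun acc k => acc + k ^ 3) 0

-- Prints pairs (sum of cubes, closed form) for n = 0, 1, ..., 9; the components agree.
#eval (List.range 10).map (fun n => (sumCubes n, (n * (n + 1) / 2) ^ 2))
```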
2016-10-03 01:51:33 · answer #3 · answered by ? 4 · 0⤊ 0⤋
The correlation between the elements is thus: place one object within a circle = 1; add one more = 2. However, two creates a set, one set to be precise, so the set is one while the elements are numerically assigned the value of two. If the element(s) plus n are of the same set, the sum is an exact set; if the elements are unequal, then they are an incomplete or abstract set. It is also possible to arrive at a different sum (answer) by distinguishing set features: how many pens and pencils on my desk = n; add the feature of how many pens, pencils, erasers, and paper clips, and you arrive at a different sum even if the items or elements are in one focus (corral). Thus e + e = addition; the static criteria of addition rely on determinants that put a generally applied value to elements, such as e1 is 1 and e2 is 1, and "plus" means to combine the numeric values, creating a new set or sum.
2006-10-28 16:20:39 · answer #4 · answered by Book of Changes 3 · 0⤊ 0⤋
Given any integer n, there is an integer n' such that n' = n + 1. In the case of n = 1, we DEFINE n' to be 2.
It is a definition, as much as 1 is the unique number such that 1•n=n for all n. In this case, we know that there is such a number (since real/complex numbers form a field), and we have defined the name of this number to be 1.
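A tiny sketch of this viewpoint in Lean (the names succOf and two' are my own, used only for illustration): 2 is introduced as a name for the successor of 1, and 1 acts as the multiplicative identity.

```lean
-- A sketch of "2 is, by definition, the successor of 1", using Lean's built-in Nat.
def succOf (n : Nat) : Nat := n + 1   -- n' = n + 1
def two' : Nat := succOf 1            -- we DEFINE 2 to be 1'

example : two' = 2 := rfl             -- the definition agrees with the numeral 2

-- 1 as a multiplicative identity: 1 * n = n for every natural n.
example (n : Nat) : 1 * n = n := Nat.one_mul n
```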
2006-10-28 16:15:35 · answer #5 · answered by Eulercrosser 4 · 0⤊ 0⤋
Well, that's only for sufficiently large values of 1 anyway. ;-)
The answer is yes, it's supposed to have been proven. Bertrand Russell's massive Principia Mathematica spent about 300 pages leading to that proof from first principles:
http://en.wikipedia.org/wiki/Principia_Mathematica
2006-10-28 16:14:18 · answer #6 · answered by arbiter007 6 · 0⤊ 0⤋
I believe "Principia Mathematica" by Alfred North Whitehead and Bertrand Russell addresses this. They prove it painstakingly, starting with the most fundamental basics of mathematical philosophy.
2006-10-28 16:15:48 · answer #7 · answered by banjuja58 4 · 0⤊ 0⤋
No, it can't be proved. It is one of the axioms of Peano.
You can of course reformulate the axioms and then 'prove' that 1 + 1 = 2.
2006-10-28 16:39:42 · answer #8 · answered by gjmb1960 7 · 0⤊ 0⤋
In general we are taught that 1+1=2. But if you were to look at it another way, like I would teach my toddler: you take 1 apple and add another apple, and you get 2 apples. So I believe that, in that sense, it has been proven just as it is.
2006-10-28 16:36:17 · answer #9 · answered by scsrls12 1 · 0⤊ 2⤋
Yes. Pick up any basic text on number theory.
You might also want to familiarize yourself with Kurt Gödel's incompleteness theorem, because you might be interested in the question asked and answered by that theorem.
2006-10-28 16:11:05 · answer #10 · answered by Anonymous · 0⤊ 0⤋