
Let's say I play the lottery twice, with odds of 1 in a million if I play once. What are the odds of winning if I play twice? Three times, etc.? Is it just 2 and 3 in a million, respectively? If I play a million times, do I have a 1 in 1 chance of winning?

thanks.

2007-08-05 10:47:07 · 7 answers · asked by ebillar 1 in Games & Recreation Gambling

7 answers

Your odds on each play are exactly what they were the first time you played. Odds are odds; they don't change with each play. Each drawing is independent, so having played once does nothing to the probability of the next one.

2007-08-05 10:56:19 · answer #1 · answered by Anonymous · 0 0

The first poster has the right idea, but I'm going to expand on it.

If you play once, it's one in a million.

twice...very close to two in a million

thrice...very close to three in a million

But when you play a million times, the chance of winning is not 100%, because it's possible to win more than once, and therefore also possible to win zero times.

So what is the probability of winning at least once if you play the lottery 1 million times and each play's probability is 1 in a million?

Well the probability of losing in any given drawing is 999,999/1,000,000

Each drawing is independent, so you just multiply the probabilities of the individual drawings. For example, to lose 6 drawings in a row, the probability would be: (999,999/1,000,000)^6 = 0.99999400001499998
You subtract this number from 1 to find the probability of winning AT LEAST once.

To win, it's 0.000005999985000019999985, so it's slightly less than 6 in 1 million.

So to win the lottery AT LEAST once in a million drawings is 1 - (999,999/1,000,000)^1,000,000 = 0.63212
0.63212 is approximately 1 - 1/e

e = 2.718281828459045... For true math dorks this number is just as important as pi.

http://en.wikipedia.org/wiki/E_(mathematical_constant)
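The arithmetic above (the at-least-once probability and its 1 − 1/e limit) can be checked with a short script. This is a sketch in Python; the function and variable names are mine, not from the answer:

```python
import math

# Chance of winning any single drawing: 1 in a million.
p = 1 / 1_000_000

def at_least_once(n):
    """Probability of winning at least one of n independent drawings."""
    return 1 - (1 - p) ** n

print(at_least_once(1))          # ~1e-06
print(at_least_once(6))          # ~5.999985e-06, slightly less than 6e-06
print(at_least_once(1_000_000))  # ~0.632121
print(1 - 1 / math.e)            # ~0.632121, the limiting value
```

The last two values agree to several decimal places because (1 − 1/n)^n approaches 1/e as n grows.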

2007-08-07 00:08:10 · answer #2 · answered by Doug 5 · 0 0

The odds of the lottery are set by the number of different number combinations that are possible. So if the back of your ticket says your odds are one in a million, then there are one million possible number combinations and your ticket is one of them. Hence one in a million. Now if you bought a second ticket (with a different combination), you would possess two of the one million possible combinations, and your odds would be two in a million. If you bought one million tickets, all with different combinations, your chance to win the lottery would be 1:1, because you would possess every possible combination. If you buy a million tickets, with different combinations, in a lottery that has 1-in-six-million odds, then your odds are 1 in 6.
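The ticket counting in this answer reduces to a single fraction; here's a minimal check in Python (the numbers mirror the six-million example):

```python
from fractions import Fraction

combinations = 6_000_000  # distinct number combinations in the lottery
tickets = 1_000_000       # distinct tickets bought, no duplicates

# Holding m of the N equally likely combinations gives probability m/N.
p_win = Fraction(tickets, combinations)
print(p_win)  # 1/6
```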


I believe the question is what if he plays the lottery twice in the same drawing, not in separate drawings. My answer covers the same drawing, while yours covers separate drawings.

2007-08-06 12:19:26 · answer #3 · answered by brandon m 1 · 0 0

Nobody seems to understand what the asker is looking for.

If you play the lottery one million times, each separate play is still 1:1,000,000, but taken collectively your expected number of wins is 1. This does not mean you will win, however. It only means that, on average, the number of chances you bought matches the odds of one win.

2007-08-06 01:49:05 · answer #4 · answered by closetcoon_fan 5 · 0 1

I'm removing my answer.

The short answer is that you will never have a 1 in 1 chance of winning.

To figure your probability, find the chance that you won't win any single drawing (999,999 out of a million), raise it to the power of the number of drawings, then subtract that from one. With a million drawings, you'll have a lot of decimal places!

The other short answer is to remember that you will never have positive expected value on a lottery. Lotteries usually have around a 50% house edge, meaning that the state ends up with half of the money wagered.

2007-08-06 00:39:42 · answer #5 · answered by John F 6 · 0 0

Yup, you're right.

However, if you play one million different drawings, it's different. To find your probability of winning if you bought tickets for one million drawings, take 999,999/1,000,000 to the millionth power, then subtract that number from one.

2007-08-05 17:51:56 · answer #6 · answered by alwaysmoose 7 · 1 1

It is a gambler's fallacy that they get you to believe, but in all honesty the chances on any single play do not increase the more times you play. Here is a simple example: let's say I flip a coin 1 million times and EVERY SINGLE ONE comes up heads. You would think the next flip has to favor tails because it has been heads the whole time. Not so: the 1,000,001st flip is still 50/50.

I saw something about how roulette was one of the least played games back in the day, until in the 80's they put up a digital board showing the last 40 numbers that had come up. This board made roulette popular again because people would look at what numbers had hit and play accordingly (red if a lot of black numbers had hit, low numbers if a lot of high numbers had hit), thinking they had a better chance of winning. They don't...



Here is the actual definition (from Wikipedia)...

The gambler's fallacy is a formal fallacy. It is the incorrect belief that the likelihood of a random event can be affected by or predicted from other, independent events.

The gambler's fallacy gets its name from the fact that, where the random event is the throw of a die or the spin of a roulette wheel, gamblers will risk money on their belief in "a run of luck" or a mistaken understanding of "the law of averages". It often arises because a similarity between random processes is mistakenly interpreted as a predictive relationship between them. (For instance, two fair dice are similar in that they each have the same chances of yielding each number - but they are independent in that they do not actually influence one another.)

The gambler's fallacy often takes one of these forms:

A particular outcome of a random event is more likely to occur because it has happened recently ("run of good luck");
A particular outcome is more likely to occur because it has not happened recently ("law of averages" or "it's my turn now").
Similarly

A particular outcome is less likely to occur because it has happened recently ("law of averages" or "exhausted its luck");
A particular outcome is less likely to occur because it has not happened recently ("run of bad luck").
A more subtle version of the fallacy is that an "interesting" (non-random-looking) outcome is "unlikely" (e.g. that a sequence of "1,2,3,4,5,6" in a lottery result is less likely than any other individual outcome). Even apart from the debate about what constitutes an "interesting" result, this can be seen as a version of the gambler's fallacy because it is saying that a random event is less likely to occur if the result, taken in conjunction with recent events, will produce an "interesting" pattern.



An example: coin-tossing
The gambler's fallacy can be illustrated by considering the repeated toss of a coin. With a fair coin the chances of getting heads are exactly 0.5 (one in two). The chances of it coming up heads twice in a row are 0.5×0.5=0.25 (one in four). The probability of three heads in a row is 0.5×0.5×0.5= 0.125 (one in eight) and so on.

Now suppose that we have just tossed four heads in a row. A believer in the gambler's fallacy might say, "If the next coin flipped were to come up heads, it would generate a run of five successive heads. The probability of a run of five successive heads is (1/2)^5 = 1/32; therefore, the next coin flipped only has a 1 in 32 chance of coming up heads."

This is the fallacious step in the argument. If the coin is fair, then by definition the probability of heads must always be 0.5, and the probability of tails must always be 0.5, never more or less. While a run of five heads has probability only 1 in 32 (0.03125), that is its probability before the coin is first tossed. After the first four tosses, those results are no longer unknown, so they no longer count: the probability of five consecutive heads is now the same as that of four heads followed by one tails. Tails is no more likely. In fact, the calculation of the 1-in-32 probability relied on the assumption that heads and tails are equally likely at every step. Each of the two possible outcomes has equal probability no matter how many times the coin has been flipped previously and no matter what the results were. The fallacy is the idea that a run of luck in the past somehow influences the odds of a bet in the future. Betting on the full sequence HHHHH would only be the long shot if we had to guess all five results before any tosses were carried out.
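The independence claim is easy to check empirically. The following is a sketch in Python (my own setup, with a fixed seed for reproducibility): simulate a long sequence of fair flips, collect every flip that immediately follows a run of four heads, and confirm that heads still comes up about half the time.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible
flips = [random.random() < 0.5 for _ in range(1_000_000)]  # True = heads

# Every flip that immediately follows four consecutive heads.
after_four_heads = [
    flips[i + 4]
    for i in range(len(flips) - 4)
    if all(flips[i:i + 4])
]

rate = sum(after_four_heads) / len(after_four_heads)
print(f"heads rate after HHHH: {rate:.3f}")  # close to 0.500
```

With roughly a million flips there are tens of thousands of HHHH runs, so the observed rate lands within a fraction of a percent of 0.5.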

As an example, the popular doubling strategy (start with $1, if you lose, bet $2, then $4 etc., until you win) does not work; see Martingale (betting system). Situations like these are investigated in the mathematical theory of random walks. This and similar strategies either trade many small wins for a few huge losses (as in this case) or vice versa. With an infinite amount of working capital, one would come out ahead using this strategy; as it stands, one is better off betting a constant amount if only because it makes it easier to estimate how much one stands to lose in an hour or day of play.
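The win/loss profile of the doubling strategy can also be simulated. This is a sketch in Python under stated assumptions (a fair even-money bet and a cap of ten doublings, standing in for a table or bankroll limit); the session logic is mine:

```python
import random

random.seed(1)

def martingale_session(max_doublings=10):
    """Bet $1, double the stake after each loss, stop after one win
    or after max_doublings straight losses. Returns the net result."""
    stake = 1
    lost = 0
    for _ in range(max_doublings):
        if random.random() < 0.5:  # win: recover all losses plus $1
            return 1
        lost += stake
        stake *= 2
    return -lost  # limit reached: lose 1 + 2 + ... + 512 = $1023

results = [martingale_session() for _ in range(100_000)]
wins = sum(r == 1 for r in results)
print(f"sessions won $1: {wins}")     # about 99.9% of sessions
print(f"worst loss: {min(results)}")  # -1023
print(f"mean result: {sum(results) / len(results):.3f}")  # near 0: fair game
```

Almost every session ends $1 ahead, but the rare ten-loss streak costs $1023, and the average works out to zero, exactly the "many small wins for a few huge losses" trade described above.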

A joke told among mathematicians demonstrates the nature of the fallacy. When flying on an airplane, a man decides to always bring a bomb with him. "The chances of an airplane having a bomb on it are very small," he reasons, "and certainly the chances of having two are almost none!"

Some claim that the gambler's fallacy is a cognitive bias produced by a psychological heuristic called the representativeness heuristic. There is an argument that we are programmed to look for patterns in chaos ("Is that a tiger half-hidden in the trees?" "Is that a bunch of ripe fruit half-hidden in the leaves?") and are actually biased towards spotting patterns when none exist. An animal that is prone to over-imagining patterns (e.g., never misses real tigers, but sometimes sees imaginary ones) is far more likely to pass on its genes than a cousin which ignores just one real tiger.


Other examples
What is the probability of flipping 21 heads in a row, with a fair coin? (Answer: 1 in 2,097,152 = approximately 0.000000477.) What is the probability of doing it, given that you have already flipped 20 heads in a row? (Answer: 0.5.) See Bayes' theorem.
Are you more likely to win the lottery jackpot by choosing the same numbers every time or by choosing different numbers every time? (Answer: Either strategy is equally likely to win.)
Are you more or less likely to win the lottery jackpot by picking the numbers which won last week, or picking numbers at random? (Answer: Either strategy is equally likely to win.)
(This does not mean that all possible choices of numbers within a given lottery are equally good. While the odds of winning may be the same regardless of which numbers are chosen, the expected payout is not, because of the possibility of having to share that jackpot with other players. A rational gambler might attempt to predict other players' choices and then deliberately avoid these numbers.)


Non-examples
There are many scenarios where the gambler's fallacy might superficially seem to apply but does not, including:

When the probability of different events is not independent, the probability of future events can change based on the outcome of past events. Formally, the system is said to have memory. An example of this is cards drawn without replacement. For example, once a jack is removed from the deck, the next draw is less likely to be a jack and more likely to be of another rank. Thus, the odds for drawing a jack, assuming that it was the first card drawn and that there are no jokers, have decreased from 4/52 (7.69%) to 3/51 (5.88%), while the odds for each other rank have increased from 4/52 (7.69%) to 4/51 (7.84%).
When the probability of each event is not even, such as with a loaded die or an unbalanced coin. The Chernoff bound is a method of determining how many times a coin must be flipped to determine (with high probability) which side is loaded. As a run of heads (or, e.g., reds on a roulette wheel) gets longer and longer, the chance that the coin or wheel is loaded increases.
The outcome of future events can be affected if external factors are allowed to change the probability of the events (e.g. changes in the rules of a game affecting a sports team's performance levels). Additionally, an inexperienced player's success may decrease after opposing teams discover his or her weaknesses and exploit them. The player must then attempt to compensate and randomize his strategy. See Game Theory.
Many riddles trick the reader into believing that they are an example of Gambler's Fallacy, such as the Monty Hall problem.
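The card-drawing numbers in the first bullet can be verified exactly with Python's fractions module (a minimal sketch; the variable names are mine):

```python
from fractions import Fraction

# Before any draw: 4 jacks among 52 cards.
p_jack_first = Fraction(4, 52)
# After one jack is removed: 3 jacks remain among 51 cards.
p_jack_second = Fraction(3, 51)
# Any other rank still has all 4 of its cards among the 51.
p_other_rank = Fraction(4, 51)

print(f"{float(p_jack_first):.4f}")   # 0.0769
print(f"{float(p_jack_second):.4f}")  # 0.0588
print(f"{float(p_other_rank):.4f}")   # 0.0784
```

Because the deck has memory (no replacement), the probabilities really do shift, which is exactly why the gambler's fallacy does not apply here.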

2007-08-05 19:24:50 · answer #7 · answered by Anonymous · 0 0
