All right, I'll take a crack at it this time.
Clearly the only reasonable algorithm is:
(1) Drop the first crystal ball from some increasing sequence of steps s_1, s_2, s_3, ... until it breaks. Say it breaks when you drop it from step s_k.
(2) Drop the second crystal ball from step s_{k-1} + 1, then s_{k-1} + 2, and so on up to s_k - 1 until it breaks.
So the problem boils down to constructing the sequence {s_n}.
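Here is a minimal sketch of that two-phase strategy in Python (my own illustration, not part of the original answer); `breaks(step)` is a hypothetical predicate standing in for an actual drop, and `seq` is whatever sequence {s_n} we end up choosing:

```python
# A minimal sketch of the two-phase strategy described above (my own code).
# `seq` is the increasing sequence s_1 < s_2 < ... of first-ball drop steps
# (it should reach at least n); `breaks(step)` is a hypothetical predicate
# that is True iff a ball dropped from `step` shatters.
def two_ball_search(n, seq, breaks):
    """Return (lowest breaking step or None, number of drops used)."""
    drops, prev = 0, 0                      # prev = highest step known to be safe
    for s in seq:
        s = min(s, n)                       # never drop from above the top step
        drops += 1
        if breaks(s):                       # first ball broke somewhere in (prev, s]
            for step in range(prev + 1, s): # phase 2: scan the gap with ball 2
                drops += 1
                if breaks(step):
                    return step, drops
            return s, drops                 # ball 2 survived the whole gap
        prev = s
        if s == n:                          # survived the top step: it never breaks
            return None, drops
    return None, drops

# e.g. on a 100-step ladder with true threshold 37, using the squares sequence:
# two_ball_search(100, [j * j for j in range(1, 11)], lambda step: step >= 37)
```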
Suppose we take s_n = 2n. Then we'll need n/2 trials until the first ball breaks, then 1 using the second ball. So this sequence uses n/2 + 1 steps (in the worst case).
Suppose we take s_n = 3n. Then we'll need n/3 trials until the first ball breaks, then at most 2 using the second ball. So this sequence uses n/3 + 2 steps (in the worst case).
Similarly, if we take s_n = kn, then we'll need n/k trials until the first ball breaks, then at most k - 1 using the second ball. So setting s_n = kn uses n/k + k - 1 steps (in the worst case).
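To sanity-check the n/k + k - 1 count, here is a small brute-force sketch (again my own code, not part of the original answer) that counts the drops for every possible breaking step and takes the maximum:

```python
# Brute-force check of the "n/k + k - 1" worst case for s_j = k*j
# (my own sketch; n is chosen divisible by each k so the sequence
# lands exactly on the top step).
def drops_linear(n, k, b):
    """Drops used by the strategy with s_j = k*j when the lowest breaking step is b."""
    j = (b + k - 1) // k               # first ball breaks on its j-th drop, at step j*k
    lo = (j - 1) * k                   # highest multiple of k known to be safe
    second = (b - lo) if b < j * k else (j * k - 1 - lo)   # linear scan with ball 2
    return j + second

n = 120
for k in (2, 3, 4, 6, 10):
    worst = max(drops_linear(n, k, b) for b in range(1, n + 1))
    print(k, worst, n // k + k - 1)    # the two numbers agree
```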
If n is really, really big, then n/1000 + 999 trials is going to be much better than n/999 + 998 trials. So by taking k larger and larger, we obtain a more and more efficient algorithm (for large n).
So I'd say there isn't a "most efficient" algorithm...the sequence of algorithms given by s_n = 2n, s_n = 3n, s_n = 4n, ..., s_n = 1000000n, ..., gives ever-more-efficient algorithms (for large values of n).
Of course, I haven't exhausted all of the possible sequences {s_n}...I've just used ones that grow linearly. But it seems to me that if I used one that grew more quickly, the second ball would have to act linearly in n anyway. So I think an O(n) algorithm is the best you can do (and you can choose the coefficient on n to be as small as you like).
So, pending a proof that it's not possible to solve the problem with an algorithm which is asymptotically less than linear in terms of n, that's my solution.
---EDIT---
Aha! How about we take s_n = n^2? That is, drop from the sequence of steps 1, 4, 9, 16, 25, 36, ...
Then the first ball will take (about) sqrt(n) trials before it breaks. Say it breaks on the kth trial. Then we'll have to test steps (k - 1)^2 + 1, (k - 1)^2 + 2, ..., up to k^2 - 1 = (k - 1)^2 + (2k - 2). Since (k - 1)^2 < n, we have k - 1 < sqrt(n), so these at most 2k - 2 additional drops number (as a rough upper bound) fewer than 2 sqrt(n).
So if we drop the first ball from steps 1, 4, 9, 16, 25, 36, ... until it breaks, then drop the second ball on the intermediate steps between the last two drops of the first ball, it will take about
sqrt(n) + 2sqrt(n) = 3 sqrt(n)
steps in the worst case. (And O(sqrt(n)) is indeed better than O(n)!)
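A quick brute-force check (my own sketch; the closed-form drop count below just mirrors the argument above) confirms that the worst case stays a bit under 3 sqrt(n):

```python
# Worst-case drop count for the squares sequence 1, 4, 9, 16, ... on an
# n-step ladder, checked by brute force over every possible threshold b.
import math

def drops_with_squares(b):
    """Drops used by the squares strategy when the lowest breaking step is b."""
    k = math.isqrt(b - 1) + 1            # first ball breaks on its k-th drop, at k*k >= b
    lo = (k - 1) ** 2                    # highest square known to be safe
    second = (b - lo) if b < k * k else (k * k - 1 - lo)   # linear scan with ball 2
    return k + second

# n is chosen as a perfect square so the schedule ends exactly at the top step.
for n in (100, 10000, 1000000):
    worst = max(drops_with_squares(b) for b in range(1, n + 1))
    print(n, worst, round(3 * math.sqrt(n)))   # worst case is a bit under 3*sqrt(n)
```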
(Thanks for the hint. Is this right? :) )
2007-04-10 06:34:44 · answer #1 · answered by Anonymous
Try skipping steps. Drop the first ball from step 1; if it survives, drop it from step 3. If step 3 breaks the first ball, check step 2 with the second ball; if step 3 doesn't break it, try step 5, and keep going until one of the balls breaks...
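A literal rendering of this skip-every-other-step idea might look like the sketch below (my own code, with a hypothetical `breaks(step)` predicate); note that in the worst case it still needs about n/2 drops, since the first ball only skips one step at a time:

```python
# Sketch of the "skip every other step" idea: try 1, 3, 5, ... with the first
# ball; when it breaks at step s, test s - 1 with the second ball.
# `breaks(step)` is a hypothetical predicate; worst case is about n/2 drops.
def skip_by_two(n, breaks):
    """Return (lowest breaking step or None, number of drops used)."""
    drops = 0
    for s in range(1, n + 1, 2):          # odd steps with the first ball
        drops += 1
        if breaks(s):
            if s == 1:
                return 1, drops
            drops += 1                    # one check of the even step below
            return (s - 1 if breaks(s - 1) else s), drops
    if n % 2 == 0:                        # even top step was never tried
        drops += 1
        return (n if breaks(n) else None), drops
    return None, drops
```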
2007-04-10 06:23:16 · answer #2 · answered by Walter B 2
Let S(k) be the ladder step from which the k-th trial with the
first ball is made, and let K(s) be the inverse
function, K(S(k)) = k. Let's assume that s is real, rather than
integer, because N is large.
Furthermore, let D(s) be the expected (cumulative) distribution
of the breaking step, so that the probability
that N is between s and s + ds is P(s) ds = dD(s).
Obviously D(0) = 0 and D(+infinity) = 1.
**************
The expected time T(s) (i.e. the number of trials) needed to
establish N when N is in the vicinity of s is given by
T(s) = K(s) + (1/2) S'(k) = K(s) + 1/(2K'(s))
(the first ball takes K(s) trials, and the second ball then scans, on
average, half of the gap S'(k) = 1/K'(s) between consecutive first-ball steps).
For example, S(k) = k², K(s) = √s, K'(s) = 1/(2√s) results in
T(s) = √s + √s = 2√s.
That is, if S(k) is chosen as 1, 4, 9, 16, 25, ... the expected
time is 2√N. I suspect that this is the answer which you
want. This is, however, wishful thinking only.
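As a numerical sanity check of T(s) ≈ 2√s for S(k) = k² (my own sketch, not part of the original derivation), one can average the exact drop count over breaking steps b near a given s:

```python
# Numerical check of T(s) ~ 2*sqrt(s) for the squares sequence S(k) = k^2:
# average the exact drop count over breaking steps b in a window around s.
import math

def drops_with_squares(b):
    """Drops used by the squares strategy when the lowest breaking step is b."""
    k = math.isqrt(b - 1) + 1               # first ball breaks on drop k, at k*k >= b
    lo = (k - 1) ** 2                       # highest square known to be safe
    return k + ((b - lo) if b < k * k else (k * k - 1 - lo))

for s in (1000, 10000, 100000):
    window = range(s - 500, s + 500)
    avg = sum(drops_with_squares(b) for b in window) / len(window)
    print(s, round(avg, 1), round(2 * math.sqrt(s), 1))   # the two agree closely
```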
**************
However.
Given a distribution D(s), the total expected time is
⟨T⟩ = integral of T(s) P(s) ds = int T(s) dD.
The optimal sequence So(k) must produce the smallest value of ⟨T⟩,
which means that any variation of Ko, K = Ko(s) + λV(s), must
have zero derivative with respect to λ:
d⟨T⟩/dλ = 0 for all V(s) such that V(0) = 0.
d⟨T⟩/dλ = d/dλ int T(s) dD =
= d/dλ int [K(s) + 1/(2K'(s))] dD =
= int d/dλ [K(s) + 1/(2K'(s))] dD =
= int d/dλ [Ko(s) + λV(s) + 1/(2(Ko'(s) + λV'(s)))] dD =
= int [V(s) + (1/2) d/dλ 1/(Ko'(s) + λV'(s))] dD =
= int [V(s) - (1/2) V'(s)/(Ko'(s))²] dD (evaluated at λ = 0) =
= int V(s) dD - (1/2) int V'(s)/(Ko'(s))² dD =
= int V(s) dD - (1/2) int V'(s) D'(s)/(Ko'(s))² ds =
= int V(s) dD - (1/2) int D'(s)/(Ko'(s))² dV(s) =
= int V(s) dD - (1/2) V(s) D'(s)/(Ko'(s))² + (1/2) int V(s) d[D'(s)/(Ko'(s))²] =
(the boundary term vanishes because V(s) D'(s) is 0 both at 0 and at infinity)
= int V(s) dD + (1/2) int V(s) d[D'(s)/(Ko'(s))²] =
= int V(s) {dD + (1/2) d[D'(s)/(Ko'(s))²]} = 0.
Because V(s) is arbitrary,
dD + (1/2) d[D'(s)/(Ko'(s))²] = 0.
Integrating:
D'(s)/(Ko'(s))² = C - 2D(s).
Letting s → ∞, where D(s) → 1 and D'(s)/(Ko'(s))² → 0, fixes C = 2, so
(Ko'(s))² = (1/2) D'(s)/(1 - D(s))
Ko'(s) = √( D'(s) / (2(1 - D(s))) )
Answer:
The solution of this problem does not exist unless the
expected distribution is specified.
Once it is specified:
********************
Ko(s) = integral ds √( D'(s) / (2(1 - D(s))) )
********************
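For instance (my own worked example; the original answer leaves D unspecified), assuming a uniform breaking-step distribution D(s) = s/L on a ladder of height L, the boxed formula gives:

```latex
% Worked example under the assumption of a uniform breaking-step
% distribution on a ladder of height L:
%   D(s) = s/L,   D'(s) = 1/L,   1 - D(s) = (L - s)/L
\[
K_o'(s) = \sqrt{\frac{D'(s)}{2\,\bigl(1 - D(s)\bigr)}}
        = \frac{1}{\sqrt{2\,(L - s)}},
\qquad
K_o(s) = \int_0^s \frac{dt}{\sqrt{2\,(L - t)}}
       = \sqrt{2L} - \sqrt{2\,(L - s)} .
\]
% Inverting K_o gives the drop schedule and the total first-ball trial count
\[
S_o(k) = L - \tfrac{1}{2}\bigl(\sqrt{2L} - k\bigr)^{2},
\qquad
K_o(L) = \sqrt{2L}.
\]
```

That is, the jumps shrink by one per drop, starting near √(2L), with about √(2L) first-ball trials in total, i.e. the familiar decreasing-interval schedule.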
2007-04-10 07:34:43 · answer #3 · answered by Alexander 6