Sunday, November 16, 2008

Sum of 15

Two players play a game, taking alternating turns. Initially, there are 9 cards with the numbers 1 to 9 on the table. On each turn, a player takes one of the cards. The first player to hold exactly 3 cards whose numbers sum to 15 wins. If no one has done so after all the cards have been taken, the game is a draw.

Can you tell who wins and how to play this game without using a computer to analyze all possible positions?

Saturday, November 15, 2008

Some more SRM experiences

This will probably only be interesting for TopCoder players, and they've probably already seen this on the forums, but anyway. Feel free to scroll past.

This was quite an educational experience from two points of view:
1) The coding times for the problems were 3 min, 52 min and 13 min. In other words, I spent far too much time on the medium and had to write the hard in a hurry, which eventually led to it failing systests. This was not the first time this has happened, and I've thought "maybe I should abandon the medium and go for the hard now?" many times, but I've always decided to postpone that decision. I would imagine that such a choice comes up more frequently for those who don't usually solve all 3 problems - how do you deal with it?
2) The hard failed because of two bugs. The first one was a loop having a "<" instead of "<=" in the boundary condition, which I think is a one-off mistake that's hard to prevent systematically. But the second one is very common: an int overflow (a tiny illustration follows below). I've always known that C# can check for that, but somehow I haven't been using this feature (the "/checked+" compiler option, or the corresponding checkbox in Visual Studio). I will now.
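To illustrate the second point, here's a tiny made-up Java example (not the actual SRM code) of the classic pattern: both operands are ints, so the multiplication is performed in 32 bits and silently wraps around before the result is widened to long.

public class OverflowDemo {
  public static void main(String[] args) {
    long wrong = 100000 * 100000;  // evaluated in int arithmetic: wraps to 1410065408
    long right = 100000L * 100000; // evaluated in long arithmetic: 10000000000
    System.out.println(wrong + " " + right);
  }
}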

Not much to say here. A rather standard hard that led me to a relatively easy win, and a very beautiful medium that I was maybe lucky to solve quickly.

A submit with 10 seconds to go that passes systests, using a theorem I'd only had 'heard-about' knowledge of before the SRM, and after rewriting the solution from scratch at least twice - doesn't look like a dull experience, does it? In retrospect, I shouldn't have even tried writing that DP over all splits of 16 numbered items into groups without first finding out that there are billions of those; and the Internet connection shouldn't have failed for about 5 minutes while I was struggling with the hard. But maybe because of all that I was finally able to invent a correct solution with about 15 minutes to go, and to implement and debug it in those quite exciting last 15 minutes.

Tuesday, November 11, 2008

Long time no see

Hiya!

Today, I've got a small tip that has proved useful for me several times in ACM-style competitions. Suppose you have a program that does some calculations with integers. You've coded it using a 32-bit data type (say, 'int' in C++/Java), and then you figure out that the values you get don't fit into that type. No problem! You can easily switch to a 64-bit data type (say, 'long long' in C++ or 'long' in Java), and your program just works with numbers up to 9*10^18! But... it turns out that such numbers are not big enough either. What now? Go for arbitrary-precision integers? Coding those in C++ takes quite some time, especially if negative numbers are involved; in Java, you'd have to rewrite all your code in the ugly BigInteger syntax.

There's an intermediate solution that gives you about 30 decimal digits almost without any hassle (it's only applicable when you use only addition, subtraction and multiplication, but not division): do the calculations with doubles AND longs. The basic idea is that the imprecise doubles will give you the first, say, 13 digits correctly, and the long will give you the last 18 digits, because it contains the precise answer modulo 2^64.

More precisely: suppose we have the answer computed in a double variable 'x' and in a long variable 'y'. How do we get the full answer in a String? Here's a Java snippet that should mostly work (please don't use it at TopCoder :)):

 1: public static String build(double x, long y) {
 2:   String sx = String.format("%.0f", x);
 3:   if (sx.length() <= 18)
 4:     return "" + y;
 5:   sx = sx.substring(0, sx.length() - 18);
 6:   long sl = 0;
 7:   for (int i = 0; i < sx.length(); ++i)
 8:     sl = sl * 10 + sx.charAt(i) - '0';
 9:   for (int i = 0; i < 18; ++i)
10:     sl *= 10;
11:   String sy = String.format("%018d", y - sl);
12:   return sx + sy;
13: }


Basically, lines 1-5 produce all digits but the last 18 (and handle the short case where the answer fits into a long exactly), lines 6-10 convert the number formed by those digits followed by 18 zeroes to a long (modulo 2^64), and line 11 subtracts that number from y, which gives us exactly the last 18 digits.

Apart from the usual double precision issues, this approach has one more flaw: when the digits on the border between the long part and the double part are ...99999... or ...00000..., it may fail even if the precision error is tiny. For example, the above code won't correctly format 10^30. You can work around that by using BigInteger (which I intentionally didn't, because the approach should be usable in C++ as well), or by manually adjusting for that case, but that brings too much hassle. Usually, for the numbers that you output in ACM-style problems, the above code is enough, and the joy of ACM is that you can just submit, and do the more careful coding only if you get WA :)
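For completeness, here's a small usage sketch of my own (25! is just a made-up example, and the main method is assumed to sit in the same class as build above): keep the value both as an imprecise double and as a long that wraps modulo 2^64, then glue the two together with build.

public static void main(String[] args) {
  double x = 1;
  long y = 1;
  for (int i = 2; i <= 25; ++i) {
    x *= i; // the first dozen or so significant digits stay correct
    y *= i; // the exact value modulo 2^64, i.e. the correct last digits
  }
  // prints 15511210043330985984000000, which is 25! with all 26 digits correct
  System.out.println(build(x, y));
}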

Sunday, November 2, 2008

Burnside's lemma

The last TopCoder SRM (the problem statement is at http://www.topcoder.com/stat?c=problem_statement&pm=9975, but it requires a TopCoder account to view) inspired me to write about a small and quite simple fact from group theory, which I think has been the most useful piece of group theory for me in programming competitions.

It's called Burnside's lemma and says (citing from Wikipedia): let G be a finite group that acts on a set X. Then the number of orbits is equal to the average number of points fixed by an element of G. What does all this mean, and how is it applicable to programming competitions? Let's continue with an example.

A standard problem that is best solved using Burnside's lemma is: consider a circular stripe of n cells. How many ways are there to color these cells with two colors, black and white, up to rotation? Here, X is the set of all colored stripes (it has 2^n elements), G is the group of its rotations (it has n elements: rotation by 0 cells, by 1 cell, by 2 cells, ..., by (n-1) cells), and an orbit is exactly the set of all stripes that can be obtained from each other using rotations, so the number of orbits is the number of distinct stripes up to rotation. Now let's apply the lemma and find the number of stripes that are fixed by the rotation by K cells. If a stripe maps to itself after rotating by K cells, then its 1st cell must have the same color as its (1+K modulo n)-th cell, which is in turn the same as its (1+2K modulo n)-th cell, etc, until we get back to the 1st cell when m*K modulo n=0. One may notice that the smallest such m is n/gcd(K,n), and thus we get n/gcd(K,n) cells that must all be of the same color. Analogously, the same number of cells must share one color starting with cell 2, cell (2+K modulo n), etc. Thus, all cells are separated into gcd(K,n) groups, each group being of one color, and that yields 2^gcd(K,n) choices. And by Burnside's lemma, the answer to the original problem is sum(2^gcd(K,n))/n, where the sum is taken over K from 0 to n-1.
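Here's a small sketch of my own (not part of the original problem) that evaluates this formula directly; BigInteger is used only because 2^gcd(K,n) can get large.

import java.math.BigInteger;

public class CircularStripes {
  static int gcd(int a, int b) {
    return b == 0 ? a : gcd(b, a % b);
  }

  // number of colorings of a circular stripe of n cells with 2 colors, up to rotation:
  // by Burnside's lemma, sum over K = 0..n-1 of 2^gcd(K, n), divided by n
  static BigInteger countStripes(int n) {
    BigInteger sum = BigInteger.ZERO;
    for (int k = 0; k < n; ++k)
      sum = sum.add(BigInteger.valueOf(2).pow(gcd(k, n)));
    return sum.divide(BigInteger.valueOf(n)); // the division is always exact
  }

  public static void main(String[] args) {
    // for n = 4 this prints 6: 0000, 0001, 0011, 0101, 0111, 1111
    System.out.println(countStripes(4));
  }
}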

That was rather complicated; here's a somewhat simpler example. Consider a square of 2n times 2n cells. How many ways are there to color it with X colors, up to rotations and/or reflections? Here, the group has only 8 elements (rotations by 0, 90, 180 and 270 degrees, reflections over the two diagonals, over a vertical line and over a horizontal line). Every coloring stays itself after rotating by 0 degrees, so that rotation has X^(4n^2) fixed points. The rotation by 180 degrees and the reflections over a horizontal/vertical line split all cells into pairs that must be of the same color for a coloring to be unaffected by that rotation/reflection, thus there are X^(2n^2) such colorings for each of them. Rotations by 90 and 270 degrees split the cells into groups of four, thus yielding X^(n^2) fixed colorings each. Reflections over the diagonals split the cells into 2n groups of 1 (the diagonal itself) and (2n^2-n) groups of 2 (all the remaining cells), thus yielding X^(2n^2-n+2n)=X^(2n^2+n) unaffected colorings each. So, the answer is (X^(4n^2)+3*X^(2n^2)+2*X^(n^2)+2*X^(2n^2+n))/8.

I understand that this may look like too many formulas for too little gain, but once you get the hang of it, it becomes really simple and easy to use.

And as a bonus, you get a way to verify that you haven't made a mistake: the lemma has a division at the end (e.g., the division by 8 in the last formula). If that division produces a remainder, you've miscalculated somewhere. And chances are, if you have made a mistake, feeding the program random testcases will make the formula produce a remainder quite soon.
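A related sanity check that works too: compare the formula with a straightforward brute force on a small case. Here's a quick sketch of my own (not from the original problem) doing that for n=1, canonicalizing every coloring of the 2x2 square over all 8 symmetries.

import java.util.HashSet;
import java.util.Set;

public class SquareColorings {
  // the formula above for n = 1: (X^4 + 3*X^2 + 2*X + 2*X^3) / 8
  static long formula(long x) {
    return (x * x * x * x + 3 * x * x + 2 * x + 2 * x * x * x) / 8;
  }

  // the 2x2 square is stored as {top-left, top-right, bottom-left, bottom-right}
  static int[] rotate(int[] g) { // rotate 90 degrees clockwise
    return new int[] { g[2], g[0], g[3], g[1] };
  }

  static int[] mirror(int[] g) { // reflect over the vertical axis
    return new int[] { g[1], g[0], g[3], g[2] };
  }

  // lexicographically smallest representative over the 8 rotations/reflections
  static String canonical(int[] g) {
    String best = null;
    for (int m = 0; m < 2; ++m, g = mirror(g))
      for (int r = 0; r < 4; ++r, g = rotate(g)) {
        String s = g[0] + "," + g[1] + "," + g[2] + "," + g[3];
        if (best == null || s.compareTo(best) < 0) best = s;
      }
    return best;
  }

  public static void main(String[] args) {
    int x = 3;
    Set<String> distinct = new HashSet<String>();
    for (int a = 0; a < x; ++a)
      for (int b = 0; b < x; ++b)
        for (int c = 0; c < x; ++c)
          for (int d = 0; d < x; ++d)
            distinct.add(canonical(new int[] { a, b, c, d }));
    System.out.println(distinct.size() + " " + formula(x)); // both should print 21
  }
}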

P.S. The Formula 1 GP at Interlagos starts in 20 minutes! I will be rooting for Massa to win the championship (I don't exactly know why - maybe because he's losing :)). The event is very likely to turn out quite exciting.

Friday, October 24, 2008

Solutions for previous problems

I can't resist completing my previous questions with answers.

First, about the integral of f' squared being not less than the integral of f squared.

Let me write the proof first:

0<=int((f'(x)-cot(x)*f(x))^2)=int(f'(x)^2)-2*int(f'(x)*f(x)*cot(x))+int(cot(x)^2*f(x)^2)=A-2B+C, where cot(x) is cos(x)/sin(x).
Let's simplify each of A, B and C on its own. The most interesting one is B:
B=int(f'(x)*f(x)*cot(x)), and integration by parts (using that (cot(x))'=-(1+cot(x)^2)) yields B=f(3)*f(3)*cot(3)-f(0)*f(0)*cot(0)-int(f(x)*f'(x)*cot(x)-f(x)*f(x)*(1+cot(x)^2))=-B+int(f(x)*f(x)*(1+cot(x)^2)), thus -2B=-int(f(x)*f(x)*(1+cot(x)^2)), and substituting that into the original inequality we get 0<=A-2B+C=int(f'(x)^2)-int(f(x)^2*(1+cot(x)^2))+int(f(x)^2*cot(x)^2)=int(f'(x)^2)-int(f(x)^2), q.e.d.

We've used that f(3)*f(3)*cot(3)-f(0)*f(0)*cot(0)=0. The first term is zero simply because f(3)=0 and cot(3) is finite (since 3 < pi), and the second term (well, cot(0) doesn't really exist, so we should interpret it as a limit from the right) is zero because f(0)=0 and f has a derivative, thus f is at most linear around zero, f(x)*f(x) is at most quadratic, and cot(x) grows like 1/x, so their product is still at most linear and tends to zero.
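For those who find the plain-text formulas above hard to parse, here is the same computation once more in standard notation (nothing new, just a restatement of the argument):

\[
\begin{aligned}
0 &\le \int_0^3 \bigl(f'(x)-f(x)\cot x\bigr)^2\,dx
   = \int_0^3 f'(x)^2\,dx - 2\int_0^3 f(x)f'(x)\cot x\,dx + \int_0^3 f(x)^2\cot^2 x\,dx,\\
2\int_0^3 f(x)f'(x)\cot x\,dx
   &= \Bigl[f(x)^2\cot x\Bigr]_0^3 + \int_0^3 f(x)^2\bigl(1+\cot^2 x\bigr)\,dx
   = \int_0^3 f(x)^2\bigl(1+\cot^2 x\bigr)\,dx,\\
\text{hence}\quad 0 &\le \int_0^3 f'(x)^2\,dx - \int_0^3 f(x)^2\bigl(1+\cot^2 x\bigr)\,dx + \int_0^3 f(x)^2\cot^2 x\,dx
   = \int_0^3 f'(x)^2\,dx - \int_0^3 f(x)^2\,dx.
\end{aligned}
\]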

This proof won't work if we replace 3 with something larger than pi (because cot isn't defined at pi, so that integration by parts wouldn't be possible), and it's instructive to see that if we replace 3 with pi exactly, equality is achieved for f(x)=K*sin(x) (one can see from the above proof that when the 3 stays a 3, equality is possible only if f(x) is identically zero), and moreover, when 3 is replaced with something larger than pi (say, k), then f(x)=sin(x*pi/k) provides a counterexample.

But what's bad (and interesting :)) about this proof is that it starts from the end. It takes a seemingly weird combination of f'(x) and f(x), and that combination turns out to be exactly what we need. But how do we invent it? I don't know. As Jedal points out in the comments to the original problem statement, this has to do with f'(x)^2-f(x)^2 having a physical meaning when f(x) is the position of a body attached to a spring as a function of time, with sin(x) being the real trajectory of that body and thus delivering the extremum of that functional. But I can't say I understand much of that.

Second, about the problem being solvable faster than O(n^2).

When we look closer at the sum being calculated, we see that everything except the gcd is a polynomial in w and h, and a sum of poly(w,h) over w from 0 to n is a polynomial in h and n, and repeating that for h, we get a polynomial in n and m that can be evaluated in O(1). But how do we deal with the gcd? We're looking at a sum of (n-w+1)*(m-h+1)*gcd(w,h). Let's iterate over w; then this reduces to a sum of (m-h+1)*gcd(w,h). Since we're already doing O(n) iterations and want to stay under O(n^2) total, we need to compute that sum faster than O(n). And here's how: let's group its terms by gcd(w,h)=k. Since k is a divisor of w, there can be only so many of those (I can't come up with an exact bound right now), so if we count all the terms with gcd(w,h)=k in roughly O(1), we're done. We need the number of values of h such that gcd(w,h)=k, and the sum of those values. If we replace "gcd(w,h)=k" by "k divides gcd(w,h)", the answer is simple - we just need to examine values of the form k*a, where a is an integer. But then we can obtain the values we actually need via the inclusion-exclusion principle, and while this isn't O(1), it's still fast enough: the number of iterations for each w is comparable to the number of pairs (a,b) such that a divides b and b divides w, which is still much less than w (for each p^k in the prime decomposition of w we get (k+1)*(k+2)/2 choices for a and b, which is much less than p^k for large p's).

Phew. That was long, difficult to understand, impractical, and whatever (and quite probably wrong somewhere).

And last but not least, here is the screencast from SRM 422. It might be fun to watch me trying to Google the solution for the hard problem, and fixing my solution in the last minutes by adding many random tries until a second runs out :)

Streaming: http://77.41.63.3/topcoder/screencast/srm422/stream.html. I believe you need at least Flash 9 to view that.
Download: http://77.41.63.3/topcoder/screencast/srm422/srm422.avi or http://77.41.63.3/topcoder/screencast/srm422/srm422.mov or (when my server goes down) http://narod.ru/disk/3267201000/srm422.avi.html (this page is in Russian, but the only thing you need to do is to solve the CAPTCHA just above the green button, and then press the green button; there might also be a checkbox in Firefox and IE asking to install a toolbar - uncheck it). The file is about 50M.

If you have problems viewing, please tell me - I'm still trying to figure out the right setup for this.

Wednesday, October 22, 2008

XXI St. Petersburg State University Championship - problem C

I participated in a contest in St. Petersburg this weekend, and I'd like to explain (at least) one problem from it - I think it may be interesting even for people who are not much into programming competitions.

The problem is: how many parallelograms are there with vertices having integer coordinates, x from 0 to n and y from 0 to m (both inclusive)? n and m are given as input, and neither exceeds 2000 (so an O(n^2) algorithm is almost surely OK, while O(n^3) or more is almost surely too much).

This is kind of a classical problem if we replace parallelograms with triangles. But it turns out that parallelograms allow some more interesting solutions. There's an approach that is the same for both parallelograms and triangles. What's the problem with counting those parallelograms directly? Well, there may be too many - on the order of n^6 (a parallelogram is basically defined by 3 of its vertices) - so we need to somehow count many at once. How can we do that? The most obvious way would be to count parallelograms that differ only by a shift together. But even that would still leave us with on the order of n^4 variants. To do better, we need to group different parallelograms together in some other way. The most straightforward way to do this is to count together all parallelograms that have the same bounding box. Why? Because all those parallelograms have the same number of possible shifts (if the bounding box is [x1,x2]*[y1,y2], then we have (n-x2+x1+1)*(m-y2+y1+1) possible shifts), so the answer for the problem can be calculated as: sum (w from 0 to n) of sum (h from 0 to m) of num(0,0,w,h)*(n-w+1)*(m-h+1), where num(x1,y1,x2,y2) is the number of different parallelograms with the given bounding box. If we can get that number in O(1), then we're done - we have an O(n^2) solution!

For a parallelogram ABCD to have the given rectangle (w times h) as its bounding box, the parallelogram must have a vertex on each of the sides of the rectangle. There are two cases:
1) one of the vertices of the parallelogram (say, A) coincides with a vertex of the rectangle. Then it's easy to see that the opposite vertex C of the parallelogram coincides with the opposite vertex of the rectangle, and the rest of the parallelogram is determined by the position of the third vertex B. The number of possibilities for B is just the number of points inside the rectangle, minus the 2 points already taken, and minus the points that lie on the segment AC. We need to divide the resulting number by 2 to account for B and D being interchangeable, multiply it by 2 to account for the two possible choices of the rectangle diagonal that A and C go to, and finally subtract 1, because the rectangle itself (the case where B and D also land on rectangle corners) is counted once for each of the two diagonals. Thus, for w,h>=1 we get: ((w+1)*(h+1)-gcd(w,h)-1)/2*2-1=wh+w+h-gcd(w,h)-1. (The reason why the gcd appears here is left to the reader.)
2) Each side of the rectangle has one vertex of the parallelogram strictly inside it. Then the parallelogram is determined by two of its vertices, A and B, as C and D will be located in symmetric positions, so there are (w-1)*(h-1) choices here.

So, we'd have an O(n^2) solution if we knew the values of gcd(w,h). But we can calculate all such gcd's in O(n^2) using dynamic programming and the fact that gcd(w,h)=gcd(w-h,h) for w>h. So we're done!
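Here's a short sketch of my own of that precomputation (just the gcd table, not the whole solution):

// fills g[w][h] = gcd(w, h) for all 0 <= w <= n, 0 <= h <= m in O(n*m) time,
// using only the fact that gcd(w, h) = gcd(w - h, h) for w > h
static int[][] gcdTable(int n, int m) {
  int[][] g = new int[n + 1][m + 1];
  for (int w = 0; w <= n; ++w)
    for (int h = 0; h <= m; ++h) {
      if (w == 0) g[w][h] = h;                // gcd(0, h) = h
      else if (h == 0) g[w][h] = w;           // gcd(w, 0) = w
      else if (w >= h) g[w][h] = g[w - h][h]; // smaller first index, already filled
      else g[w][h] = g[w][h - w];             // same w, smaller h, already filled
    }
  return g;
}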

There's another O(n^2) solution that groups parallelograms in a different manner: we can group them by the location of their center, which always has half-integer coordinates. Given the location of the center, the parallelogram is defined by two of its vertices, and those two vertices can be placed arbitrarily inside some rectangle, minus the cases where they lie on the same line with the center.

But explaining those solutions is not the main point of this post. I'm writing it because, when I was recently asked at a TC Spotlight Session what kind of mathematical background is needed to succeed in programming competitions, my answer was "One thing that is absolutely crucial is to be able to think 'mathematically'", and I had difficulty explaining what that means. I believe this problem, and this solution, are an excellent example, and explanation, of that phrase. It's crucial to be able to do several quite easy reduction steps until your problem splits into several quite easy subproblems, and to do the reductions carefully, without omitting any important cases (like the one where the gcd appeared in the above solution). To be able to 'explore' a given direction deeply and check whether it yields a solution or not (instead of just trying to apply some known ideas one by one).

Does this make sense?

And also, can you solve the above problem faster than O(n^2)?

Thursday, October 16, 2008

A short math problem instead of a long philosophical text this time! Here's a problem that was given by one of our math lecturers at the university, and I was quite impressed with its solution back then.

Suppose f(x) is a good function (say, it has an n-th derivative for every n) on [0,3], with f(0)=f(3)=0. Prove or disprove: int(der(f)^2)>=int(f^2), where int() means the integral from 0 to 3, and der(f) is the derivative of f. Can you solve it? [right, right, I'm testing if anyone actually reads this :)]

And as a bonus, here's a random blurry picture from my recent trip to London for Google Code Jam (I tried to find a good picture that has me in it, but it appears there isn't any). London is exciting!

Tuesday, October 14, 2008

IOI 2002: Frog

This time I'll try to recall my encounter with the problem Frog from IOI 2002 (http://olympiads.win.tue.nl/ioi/ioi2002/contest/day1/frog/frog.pdf). It looks interesting to me because I remember a long discussion with teammates after the contest about an efficient algorithm; we spent maybe several hours and could come up with something OK but difficult to write. Of course, none of us believed he had solved it during the contest (though I'm not sure whether some of us actually did get a full score, because we may have overestimated our algorithms' complexities). I was wondering whether this problem would still look difficult to me now, 6 years and hundreds of competitions later.

And the thing is - when I look at it now, it is painfully obvious. The authors seem to expect an N^2 solution, with N equal to 5000, and give us 64MB of memory (the restrictions come from http://olympiads.win.tue.nl/ioi/ioi2002/contest/day1/overview1.pdf). But hey, there's the obvious N^3 solution: for every pair of marks (a,b) we check whether the path that starts at a and has b as the first hit continues correctly until it leaves the field (and that going one step to the left of a gets us outside immediately). To avoid counting the same path twice, we can order all marks by x-coordinate and then by y-coordinate, and only look at pairs with a less than b. It's not clear whether this algorithm, implemented with hashtables, would actually fit into the time limit and get a full score; but there's no need to check, as this solution is very easily transformed into an N^2 DP: we compute the same boolean D[a,b] as above (does the path continue correctly beyond b until the end of the field) for each pair (a,b) with a less than b, but without requiring anything about what precedes a on that path (and only take that into account when constructing the answer). To compute it, we find the next mark c on the path a-b-..., and if that mark is present and D[b,c] is true, or c is outside the field, then D[a,b] is true. The only issue is to find c in constant time. This can be done with hashtables, but again, there's no need: if we traverse the values of b (with the value of a fixed) in our order (first by x-coordinate, then by y-coordinate), then the corresponding values of c appear in the same order, so a second pointer over the sorted marks suffices, requiring only O(N) operations for each a and thus O(N^2) in total.

In terms of memory this uses only 25M booleans, which could be compressed to just about 3M, but there's no need as we have 64M. And the code should be so clean and simple that I won't even write it here (does anyone want me to?). All in all, I would spend at most 20-30 minutes on such a problem today.

But one might say, wasn't this expected? Don't the contestants' knowledge and the problems' difficulty constantly rise, chasing one another? True. But it amazed me that there's no (I mean, totally no) clever algorithm involved, just a simple DP with a simple optimization that reduces O(N) to amortized O(1). I could swear we used to do much more complicated DPs in high school without any trouble. Is the amortization idea so new that it was very difficult 6 years ago? I doubt that's the only reason. Looking back, I can think of at least 2 more reasons:
  1) I suspect that we were too nervous to find it at the actual IOI and right after it. But the thing is, we didn't feel that way! At least I remember myself being fairly confident and determined back then.
  2) We were reluctant to inject the ideas that we knew into the algorithms that we knew, changing them significantly. This is more of a mathematical weakness, so both the university education and solving problems within a tougher timeframe at ICPC trainings should have eliminated that difficulty.

  Any more ideas?

Friday, October 10, 2008

Screencast and challenging

For those of you who don't follow the TopCoder forums - I've been screencasting my SRM experiences, on the probably hopeless assumption that people might find them interesting.

Recollection of events for the first part of the challenge phase:
00:40 into the video - open the code, read it
01:53 - spot a bug: the coder doesn't distinguish "player X will win" from "only player X or nobody can win". Start thinking about the challenge case.
02:40 - how should the challenge case work? The first player should have two options; in one of them, the situation "only player 1 or nobody can win" should arise, so that the code mistakenly decides that player 1 has a forced win and chooses that move; but in reality, there should be another move that allows some other player to win. The problem statement lets us build practically any graph for the game, so we can choose an arbitrary number of stones removed for those two moves; let's say that those moves take 1 and 2 stones, and let's say the initial amount of stones is 5 (hopefully that will be enough).
02:50 - one of the cases should make another player win; let's make it so that the second player wins instantly by taking 3 stones from 3.
02:56 - and we should have two variants (either player 1 wins or nobody wins) in the second case; so that gives us two possible moves again; so as not to overlap with the existing positions, let's make them 2 and 3 from 4.
02:59 - one of them should lead to a "nobody wins" situation, so we leave the position with 2 stones with no jumps. The position with 1 stone should have the first player win, so assuming we have only 2 players, that position should have a winning move: 1.
03:15 - so is this gonna work? No, it isn't; player 1 should still be able to win after both initial moves, otherwise even without a forced win he should choose only one of them according to the problem statement. So we should modify the "3" position to allow the first player to win. But how?
03:24 - we'll need a couple more positions to achieve this; let's insert them.
03:39 - we'll need to adjust existing jumps; but...
04:08 - we can't achieve the current goal with only two players, as we should have a position where player 1 AND some other player can win; if that other player is player 2, then he will certainly win if given the choice; thus, player 2 should choose whether player 1 or player 3 can win. But, first of all, now that we have 3 players, we should alter the "player 1 or nobody wins" position. For player 1 to win, we now need two extra moves after player 2's move. We achieve that by having position 2 have one move to position 1, and position 3 have no moves; player 2 now has two moves, 3 and 4, from position 6 - the first one leads to a draw, and the second leads to two more moves being performed and player 1 winning.
04:52 - and how do we make player 2 choose whether player 1 or player 3 wins in the other case? He should choose whether there are 1 or 2 moves left; luckily, we already have such positions, so we just create appropriate moves from position 5: move 3, which leads to position 2 and player 1 winning, and move 4, which leads to position 1 and player 3 winning.
05:10 - re-verifying the case shows that everything should work; the challenge phase is short and others may challenge this as well, so let's not re-verify it once again and click "OK".
05:15 - enter the two more numbers required and challenge!
05:17 - successful! And the expected and actual results are what I expected them to be, so the case should be OK. Cool, let's see if the others have made the same mistake.

SRM 420:
Streaming: http://77.41.63.3/topcoder/screencast/srm420/stream.html. I believe you need at least Flash 9 to view that.
Download: http://77.41.63.3/topcoder/screencast/srm420/srm420.avi or http://77.41.63.3/topcoder/screencast/srm420/srm420.mov or (when my server goes down) http://narod.ru/disk/2954156000/srm420.avi.html (this page is in Russian, but the only thing you need to do is to solve the CAPTCHA just above the green button, and then press the green button; there might also be a checkbox in Firefox and IE asking to install a toolbar - uncheck it). The file is 55M.

If you have problems viewing, please tell me - I'm still trying to figure out the right setup for this.

Monday, September 22, 2008

My LJ Archive

My (mostly personal and Russian) LJ is archived at http://77.41.63.3/lj/

Saturday, May 31, 2008

My very first algorithm problem experience

Back in 1997, my Informatics teacher, Yulia Lvovna Vorontsova, suggested that I participate in the Moscow regional olympiad in informatics (Moscow consists of 10 regions: Central, Northern, Northwestern, Western, ..., Northeastern, and (guess?) Zelenograd).

The olympiad had three problems to offer, but the only one I remember now is: given an integer N not exceeding 10000, output 2 to the power of N.

I had never solved algorithm problems before that olympiad, so everything was new and difficult to me. Luckily, I knew how to read and write files in Pascal. I solved the problems in Pascal back then, but I'll use Java for the explanations here.

I thought of keeping an array of digits. The array was fixed-size, and its first element contained the most significant digit (which was always zero, of course, as the size was chosen with some margin). I can reconstruct my code as follows (this is definitely a very inaccurate reconstruction, but I don't remember much from 11 years ago):
String power(int n) {
  int[] digits = new int[5000];
  digits[4999] = 1;
  for (int i = 0; i < n; ++i) {
    int p = 0;
    for (int j = 4999; j >= 0; --j) {
      int r = p + 2 * digits[j];
      p = 0;
      while (r > 10) {
        p += 1;
        r -= 10;
      }
      digits[j] = r;
    }
  }
  String s = "";
  for (int i = 0; i < 5000; ++i)
    if (s.length() > 0 || digits[i] > 0)
      s += digits[i];
  return s;
}
This code suffers from several common problems (like repeating 5000 and 4999 all over the place), but I'm still amazed that I could get it working in 6th grade.

The rules for that contest were standard IOI-style: we had to write solutions for several problems during the contest; after the contest ended, the solutions were checked on several testcases each, and a correct answer on a testcase was worth some number of points. Of course, the checking was done manually: one of the judges sat next to you and ran your program from the Norton Commander command line (it was DOS everywhere at that time).

And you know what? The above program passed all testcases but one. The testcases were (not exactly these, but you get the idea): 1, 2, 5, 10, 10000. And apparently the answer for 10000 was incorrect. Can you spot the bug in the above code?

Strangely, I still remember comparing the correct and wrong answers for that testcase. The correct answer started with 1995..., and my program output something that started with 1994... How could the error be so tiny? Because r > 10 should have been r >= 10. A digit of '10' is not good :)
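For completeness, the corrected carry handling in the inner loop of the reconstruction above would be:

      while (r >= 10) {
        p += 1;
        r -= 10;
      }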

That was my very first problem, and my very first bug. Everything worked out well in the end: I advanced to the Moscow-wide olympiad anyway, and then to the Russian olympiad, and then...