Saturday, June 8, 2019

Power towers solution

A long, long time ago I mentioned an ICPC practice session problem: given two power towers, which one evaluates to a larger value? A power tower is an expression like 2^2^3^2, which is evaluated right-to-left as 2^(2^(3^2)). Each number in both power towers is an integer between 1 and 100, and the height of each tower is also between 1 and 100.
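For very small towers, the right-to-left evaluation order can be illustrated directly. This is a toy sketch just for this post (the class and method names are mine), since BigInteger arithmetic is of course hopeless for the actual constraints:

```java
import java.math.BigInteger;

public class TowerEval {
    // evaluate a[0]^(a[1]^(...^a[n-1])) right-to-left
    static BigInteger eval(int[] a) {
        BigInteger result = BigInteger.valueOf(a[a.length - 1]);
        for (int i = a.length - 2; i >= 0; --i) {
            // the exponent must still fit in an int for this toy version
            result = BigInteger.valueOf(a[i]).pow(result.intValueExact());
        }
        return result;
    }

    public static void main(String[] args) {
        System.out.println(eval(new int[]{2, 3, 2})); // 2^(3^2) = 2^9 = 512
    }
}
```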

I also mentioned that we came up with a working approach together with Roman Elizarov and Evgeny Kapun, but never got around to actually describing it. For some reason, quite a few people have asked me to come back to this problem recently, so here we are :)

First, we can notice that any 1 in a tower, together with everything above it, can be removed without changing the value (1 raised to any power is 1, and raising to the power 1 changes nothing), so let's concentrate on the case where all numbers are between 2 and 100.

Let's compare our power towers from top to bottom, aligning them so that the bottom numbers are next to each other. In other words, we will follow this process: we start either with two ones, or with a one and some power tower, and then repeatedly add a number (between 2 and 100) to the bottom of each tower.

The first observation is that there exists a number k such that if the first tower is at least k times bigger, it will remain at least k times bigger even if we add 2 to its bottom and 100 to the bottom of the other tower. Effectively this means that as soon as one of the towers is at least k times bigger in our top-down process, we can skip going through the remaining numbers and declare that tower the winner.

To prove it, let's write down some formulas. We have x >= k*y, and we need to prove that 2^x >= k*100^y. But 2^x >= 2^(k*y) = 100^y*(2^k/100)^y. Since y >= 1 and 2^k/100 >= 1, we just need 2^k/100 >= k, which is true for k=10, for example.
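This can also be confirmed numerically. Here is a tiny sketch (a hypothetical helper of my own, working with plain logarithms to avoid overflow) that verifies the tightest case x = 10*y on a few sample values:

```java
public class LemmaCheck {
    // check the lemma for k = 10: if x >= 10*y, then 2^x >= 10*100^y;
    // the tightest case is x = 10*y, so it suffices to check that
    static boolean holds(double y) {
        double x = 10 * y;
        double logLeft = x * Math.log(2);                   // log(2^x)
        double logRight = Math.log(10) + y * Math.log(100); // log(10*100^y)
        return logLeft >= logRight;
    }

    public static void main(String[] args) {
        // 2^10/100 = 10.24 >= 10, so this should print true for every y >= 1
        for (double y = 1; y <= 1e6; y *= 10) {
            System.out.println(y + " " + holds(y));
        }
    }
}
```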

Intuitively it seems likely that one of the towers will become at least 10 times bigger pretty quickly, and we won't need to go very deep. To check whether this is the case, let's define an expand operation: given a set of numbers S, the set expand(S) is built in the following way. First, we build a set T obtained by uniting S with all numbers obtained by adding one more power at the bottom: T=S+{2^x for x in S}+{3^x for x in S}+...+{100^x for x in S}. Then we sort all numbers in T, collapse equal numbers into one, and remove all numbers that are both at least 10 times bigger than the previous number and at least 10 times smaller than the next number. The resulting set is called expand(S). The motivation behind this definition: if the two numbers we see during our power tower comparison process are in S, then after adding one more power at the bottom of each tower we will either get a decision (one is at least 10 times bigger than the other), two equal numbers, or two numbers from expand(S).

Now let's start with the set S={1,2,...,100}, and repeatedly compute expand(S), expand(expand(S)), .... It turns out that the process stops very quickly: already expand(expand(S))=expand(expand(expand(S))), and this set has just 17709 elements!

It means that if we compare two power towers from top to bottom, then we only need to handle the numbers from expand(expand(S)). If at some point one number is 10 times bigger than the other, we know the result of the comparison. If at some point the numbers are equal and are at least 300, then we just need to look at the next differing number going from top to bottom, since (100/99)^300 > 10.

Now, how do we actually work with the numbers from expand(expand(S)), and in particular, how do we find out that it stops growing at 17709 elements? The numbers in that set are still huge. The only way I see to approach this is to experiment, trying out different ideas until one works. In this case, two ideas were necessary.

First, we will store the logarithms of the elements of our set instead of the elements themselves. It turns out that all their logarithms fit in the double floating-point type (the largest is about 3 million). However, during the expansion step we need to temporarily work with numbers as high as 100^x for x from our set, and those don't fit into a double.

Therefore, when expanding the set, we will work with logarithms of logarithms of the numbers. For example, if we have y=log(x), then we will compare numbers of the form log(log(b^x))=log(x*log(b))=log(x)+log(log(b))=y+log(log(b)).

We need to be able to check two things: whether two numbers of this form are equal, and whether one is at least 10 times bigger than the other. For the equality test, we simply check if the difference is smaller than some constant eps. When running the expansion process, we can print out the biggest difference smaller than eps and the smallest difference bigger than eps that we encounter. For eps=1e-13, the former is 3e-15, and the latter is 4e-13. Given the more than 100 times gap between the two, it seems plausible that the former is just floating-point rounding error and the two numbers are equal, while the latter is a real difference. This is not a proof, of course, but it gives enough confidence (I suspect this leap of faith could be the main reason this problem was only used for an ICPC practice session, and not for the real contest).

Now we need to check if x/y>=10 when we know p=log(log(x)) and q=log(log(y)). x/y>=10 is the same as log(x)-log(y)>=log(10), which is the same as exp(p)-exp(q)>=log(10), which is the same as (exp(p-q)-1)*exp(q)>=log(10), which is the same as log(exp(p-q)-1)+q>=log(log(10)). To avoid overflow when computing exp(p-q) in this formula, we will simply check if p-q>=5 first, since in that case the inequality is definitely true.
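The two comparisons can be sketched as follows (a hypothetical helper of my own, not the exact contest code; Math.expm1 computes exp(p-q)-1 slightly more accurately for small arguments):

```java
public class LogLogCompare {
    static final double EPS = 1e-13;

    // are x and y (presumably) equal, given p = log(log(x)), q = log(log(y))?
    static boolean equal(double p, double q) {
        return Math.abs(p - q) < EPS;
    }

    // is x at least 10 times bigger than y? assumes p >= q
    static boolean muchBigger(double p, double q) {
        if (p - q >= 5) return true; // exp(p-q) is huge, inequality surely holds
        // log(exp(p-q)-1) + q >= log(log(10))  <=>  x/y >= 10
        return Math.log(Math.expm1(p - q)) + q >= Math.log(Math.log(10));
    }
}
```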

Here is the code that we used to come up with and verify all above hypotheses:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class PowerChains {
   static final int MAXN = 100;
    
    static class Number implements Comparable<Number> {
        double what;
        int[] origin;
        
        public Number(Number parent, double value, int extra) {
            if (extra < 0) {
                origin = parent.origin;
                what = value;
                return;
            }
            if (parent != null) {
                origin = new int[parent.origin.length + 1];
                System.arraycopy(parent.origin, 0, origin, 1, parent.origin.length);
            } else {
                origin = new int[1];
            }
            origin[0] = extra;
            what = value;
        }

        public int compareTo(Number number) {
            return Double.compare(what, number.what);
        }
        
        public String toString() {
            StringBuilder b = new StringBuilder();
            b.append(what);
            for (int x : origin) b.append(" " + x);
            return b.toString();
        }
    }

   public static void main(String[] args) {
      Number[] interesting = new Number[MAXN];
      for (int i = 0; i < MAXN; ++i) {
         interesting[i] = new Number(null, Math.log(i + 1), i + 1);
      }
      while (true) {
         Number[] pows = new Number[MAXN * interesting.length];
         for (int i = 1; i < interesting.length; ++i) {
            pows[i] = new Number(interesting[i], Math.log(interesting[i].what), -1);
         }
         pows[0] = pows[1];
         for (int b = 2; b <= MAXN; ++b) {
            double logb = Math.log(Math.log(b));
            for (int i = 0; i < interesting.length; ++i) {
               pows[(b - 1) * interesting.length + i] =
                     new Number(interesting[i], interesting[i].what + logb, b);
            }
         }
         Arrays.sort(pows);
         double maxDiff = 0.0;
         double minDiff = 1e100;
         double maxBase = 0.0;
         double needDiff = Math.log(10);
         List<Number> newInteresting = new ArrayList<>();
         newInteresting.add(new Number(null, 0.0, 1));
         boolean wasBig = true;
         for (int i = 0; i + 1 < pows.length; ++i) {
            double diff = (pows[i + 1].what - pows[i].what) / (pows[i + 1].what);
            if (Math.abs(diff) < 1e-13) {
               if (diff > maxDiff) {
                  maxDiff = diff;
               }
            } else {
               if (diff < minDiff) {
                  minDiff = diff;
               }
               double a = pows[i].what;
               double b = pows[i + 1].what;
               boolean big;
               // check log(exp(b-a)-1)+a >= log(log(10)), i.e. whether the
               // bigger number is at least 10 times the smaller one
               if (b - a > 5)
                  big = true; // exp(b - a) would overflow the check; certainly big
               else {
                  double by = Math.exp(b - a) - 1;
                  big = a + Math.log(by) > Math.log(needDiff);
               }
               if (!big) {
                  if (wasBig)
                     newInteresting.add(new Number(pows[i], Math.exp(pows[i].what), -1));
                  newInteresting.add(new Number(pows[i + 1], Math.exp(pows[i + 1].what), -1));
                  maxBase = Math.max(maxBase, pows[i + 1].what);
                  wasBig = false;
               } else {
                  wasBig = true;
               }
            }
         }
         System.out.println(newInteresting.size() + " " + maxDiff + " " + Math.exp(maxBase) + " " + minDiff);
         if (newInteresting.size() == interesting.length) break;
         interesting = new Number[newInteresting.size()];
         for (int i = 0; i < interesting.length; ++i) {
            interesting[i] = newInteresting.get(i);
         }
      }
   }
}

Thanks for reading, and check back for more!

Monday, May 13, 2019

A fastest week

A TopCoder SRM was the first event of the Apr 22 - Apr 28 week as well, more precisely SRM 756 (problems, results, top 5 on the left, analysis). Stonefeang was not the fastest on any problem, but was very fast on all three, and therefore held a commanding coding phase lead that he defended during the challenges. Well done!

The next Open Cup 2018-19 round, the Grand Prix of Minsk, took place on Sunday (top 5 on the left). Many teams solved everything this time, but team Past Glory did this under 3 hours, and therefore earned a clear first place. Congratulations!

Finally, Google Code Jam 2019 Round 1B wrapped up the week (problems, results, top 5 on the left). Benq was a lot faster than everybody else this time, and even one incorrect attempt did not really put his first place in doubt. Great performance!

Thanks for reading, and check back for more.

A forethought week

The Apr 15 - Apr 21 week was quite busy. TopCoder SRM 755 kicked things off on Monday (problems, results, top 5 on the left, analysis). The problems were quite straightforward, therefore the coding speed and the challenge phase came to the forefront. Tourist did very well on both, and got a well-deserved first place. Congratulations!

TopCoder hosted a second round this week, the TCO 2019 Round 1A (problems, results, top 5 on the left, analysis). Most high-rated contestants got a bye to Round 2, therefore it was a chance to see some new names on the scoreboard. Congratulations to mrho888 on the great challenge phase performance and the win!

Codeforces ran the Elimination Round for the Forethought Future Cup right after the TCO round (problems, results, top 5 on the left, analysis). There were eight problems and therefore the round was slightly longer at 2:30, but this did not stop tourist from solving everything in just over an hour. Well done!

Finally, the Open Cup 2018-19 returned with the Grand Prix of Baltic Sea on Sunday (results, top 5 on the left). The team Gifted Infants solved 10 problems in just under three hours and nothing in the remaining two, and yet nobody could catch them. Congratulations on the victory!

Thanks for reading, and check back for more!

A postponed week

The Apr 8 - Apr 14 week was light on contests from the usual suspects, but Google Code Jam Round 1A filled the void (problems, results, top 5 on the left). Gennady.Korotkevich required just 26 minutes to wrap things up, with others following in approximately 5-minute increments. Congratulations to everybody who made it to Round 2!

In my previous summary, I mentioned two problems. The first one came from the ICPC World Finals: you are given a street with n<=500 traffic lights on it, arranged from left to right. The i-th traffic light stays red for r_i seconds, then green for g_i seconds, then repeats this cycle infinitely. The cycles of all traffic lights start at the same time 0, r_i and g_i are integers, and r_i+g_i does not exceed 100. Suppose you approach this street from the left at a random moment in time, and drive until you encounter a red light for the first time, or until you pass the entire street. What is the probability that you see red for the first time at the 1st, 2nd, ..., n-th traffic light?

The solution gives a whole new meaning to the term "meet in the middle". First of all, suppose all periods r_i+g_i are pairwise relatively prime. In this case seeing red at each of the traffic lights is an independent random event, and therefore our answers are r_1/(r_1+g_1), g_1/(r_1+g_1)*r_2/(r_2+g_2), ...
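For this pairwise relatively prime case, the product above can be computed directly; a minimal sketch (the class and method names are mine):

```java
public class CoprimeCase {
    // answer[i] = probability of seeing red for the first time at light i,
    // assuming all periods r[i]+g[i] are pairwise relatively prime
    static double[] firstRedProbabilities(int[] r, int[] g) {
        int n = r.length;
        double[] answer = new double[n];
        double passAll = 1.0; // probability of passing all previous lights
        for (int i = 0; i < n; ++i) {
            double period = r[i] + g[i];
            answer[i] = passAll * r[i] / period;
            passAll *= g[i] / period;
        }
        return answer;
    }
}
```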

Moreover, an only slightly more complex solution exists when any two periods are either relatively prime, or one divides the other. In each group of periods where one divides another, we pick the largest one and track, going from left to right, which units of time modulo that largest period have already seen red; the probabilities for different groups can still simply be multiplied, since they are independent.

However, this problem has arbitrary periods up to 100, so the above condition does not hold. In order to overcome this, let's split the time into m parts: the first part will have units 0, m, 2m, ..., the second part will have units 1, m+1, 2m+1, ..., and so on. Each of the parts can be "compressed" m times to obtain an instance of the same problem, but with a period of p replaced by a period of p/gcd(p,m).

The only remaining task is to find a value of m that guarantees that for any two numbers p and q up to 100, the numbers p'=p/gcd(p,m) and q'=q/gcd(q,m) are always relatively prime or one divides the other. We can write a small program for that, and it turns out that m can be as small as 2520=2^3*3^2*5*7.
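Such a small program could look like this (a sketch of my own; it only verifies that a given m works, without claiming minimality):

```java
public class FindM {
    static long gcd(long a, long b) { return b == 0 ? a : gcd(b, a % b); }

    // does m make every pair of reduced periods coprime or nested?
    static boolean works(int m, int maxPeriod) {
        for (int p = 2; p <= maxPeriod; ++p) {
            for (int q = 2; q <= maxPeriod; ++q) {
                long pp = p / gcd(p, m);
                long qq = q / gcd(q, m);
                if (gcd(pp, qq) != 1 && pp % qq != 0 && qq % pp != 0) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(works(2520, 100)); // should print true
    }
}
```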

More generally, if the numbers were up to x instead of 100, m could be taken as the product, over all primes up to sqrt(x), of the largest power of that prime not exceeding sqrt(x). With this m, every p' is a prime power: any prime power remaining in p' exceeds sqrt(x), so if p' contained two different primes, the original number p would be a product of two prime powers each exceeding sqrt(x), and thus bigger than x, a contradiction. And two prime powers are always either relatively prime or one divides the other.

We have essentially decomposed the problem m ways at the top, and that allowed us to split it into independent problems of size at most x at the bottom, hence the "meet in the middle" mention above. The running time of this solution is O(n*m*x), which is fast enough.

The second problem I mentioned came from Codeforces: you have n game units, each capable of causing 1 unit of damage per second. You need to merge them into m bigger game units, and if a_i units were merged into the i-th bigger unit, it will cause a_i units of damage per second. The m bigger units will then attack an enemy army also consisting of m units, with the j-th enemy unit having b_j hit points. The attack will happen second-by-second, and in each second each of your big units attacks one enemy unit. It is allowed for multiple units to attack one enemy unit in one second. What is the fastest way to decrease the hit points of all enemy units to zero or negative? You can choose the way you merge the n units into m big units, but you can do it only once, before all attacks.

The key idea in solving this problem is to postpone the choice of the a_i. Let's imagine we start with a single big unit of size n. Let it keep attacking the first enemy unit, until at some second its hit points drop to zero or below. If they dropped below zero, it means that we have wasted some attacking power; to avoid that, let us split this big unit into two at that moment: one that exactly finishes the first enemy unit, and another that can already attack the second enemy unit during that same second. In the following seconds, both units keep attacking the second enemy unit, and when its hit points drop below zero, we split one of our units again to avoid wasting attacking power.

We will end up with at most m+1 units after all splitting is done, since we split once per enemy unit, but we can notice that the last split was not necessary, so we can skip doing it and end up with m units as required. And we have clearly spent the smallest possible time, since we never wasted a single unit of damage during all seconds except the last one.
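A consequence of this argument is that the answer itself has a very simple closed form: the merged units always deal n damage in total per second, and no damage is wasted except possibly in the last second, so the minimal time is ceil(sum(b_j)/n). A minimal sketch of that computation (assuming my reading of the argument is right; names are mine):

```java
public class MergeUnits {
    // minimal number of seconds to bring all enemy units to zero or below,
    // given total attacking power n and enemy hit points b
    static long minSeconds(long n, long[] b) {
        long total = 0;
        for (long hp : b) total += hp;
        return (total + n - 1) / n; // ceiling division
    }
}
```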

In some sense, the main obstacle in solving this problem is not realizing that it is actually easy :)

Thanks for reading, and check back for more!

Sunday, May 12, 2019

A double Moscow week

The first week of April was of course focused on the ICPC 2019 World Finals (problems, results, top 12 on the left, official broadcast, our stream, analysis). It feels strange to write a short summary given the amount of coverage available for the contest, but I'll do it anyway :) Warsaw took a huge early lead, but almost stopped after two hours, spending an eternity on the technically tricky problem C and still not solving it. The three pre-tournament favorites, Moscow, MIT and Tokyo, overtook them, but only Moscow could solve all 10 solvable problems. Congratulations to the winners and to all medalists!

Problem K was the most exciting for me, even though I didn't actually solve it myself. It went like this: you are given a street with n<=500 traffic lights on it, arranged from left to right. The i-th traffic light stays red for r_i seconds, then green for g_i seconds, then repeats this cycle infinitely. The cycles of all traffic lights start at the same time 0, r_i and g_i are integers, and r_i+g_i does not exceed 100. Suppose you approach this street from the left at a random moment in time, and drive until you encounter a red light for the first time, or until you pass the entire street. What is the probability that you see red for the first time at the 1st, 2nd, ..., n-th traffic light?

Codeforces Global Round 2 took place on Saturday, when most of the teams were already back home or on the way there (problems, results, top 5 on the left, analysis). ICPC gold medalist ecnerwala got enough points for the first place in less than an hour, even though our entire streaming team was trying to catch him as you can see :) Well done!

Problem G in this round was very nice. You have n game units, each capable of causing 1 unit of damage per second. You need to merge them into m bigger game units, and if a_i units were merged into the i-th bigger unit, it will cause a_i units of damage per second. The m bigger units will then attack an enemy army also consisting of m units, with the j-th enemy unit having b_j hit points. The attack will happen second-by-second, and in each second each of your big units attacks one enemy unit. It is allowed for multiple units to attack one enemy unit in one second. What is the fastest way to decrease the hit points of all enemy units to zero or negative? You can choose the way you merge the n units into m big units, but you can do it only once, before all attacks.

Thanks for reading, and check back for more!

Sunday, May 5, 2019

An order statistic week

The weekend of the Mar 25 - Mar 31 week was already the time to travel to Porto for ICPC World Finals for participants, but it still had a couple of rounds. TopCoder SRM 754 was the first one (problems, results, top 5 on the left, analysis). tourist's solution for the medium was challenged, which doesn't happen very often, but he more than made up for that by being the only contestant to solve the hard. Congratulations on the win!

Codeforces Round 549 followed a few hours later (problems, results, top 5 on the left, analysis). It was Um_nik's turn to dominate, being the only one to solve all problems but already leading the field with a sizable margin after solving 4 out of 5. Well done!

In the previous summary, I mentioned an AtCoder problem: we pick n independent uniformly random points on a circle, and then find two points the (circular/angular) distance between which is as close to 1/3 of the circle as possible. What is the expected value of the difference between that distance, expressed as a fraction of the circle, and 1/3? n is up to 10^6, and you need to compute the answer using division modulo 10^9+7.

The official editorial has a similar but not exactly the same explanation that does not require any googling to solve the problem. However, I'd like to share my approach as well.

Let's collapse the circle 3 times, in other words project each point x between 1/3 and 2/3 to x-1/3, and each point between 2/3 and 1 to x-2/3. The original question of finding two points with distance as close to 1/3 as possible almost translates into finding the two closest points in this projection: if two points x and y are close in the projection, then they are either also close on the original circle (with probability 1/3), or their distance on the original circle is close to 1/3 (with probability 2/3), since 2/3 is also 1/3 when viewed from another angle. Moreover, these probabilities of 1/3 are almost independent for pairs of consecutive points: it's impossible that all pairs of consecutive points are close on the original circle as well, but apart from that, the probability that any k<n consecutive pairs are each close on the original circle as well is equal to 1/3^k.

Now, in order to find the expected value of the difference between the sought distance and 1/3, define f(x) as the probability that this difference is >=x, then our answer is the integral of f(x).

Consider g(x,k), the probability that the k-th smallest distance (in sorted order, the so-called order statistic) between consecutive points among n uniformly random ones is >=x. Then the probability that exactly k distances between consecutive points are less than x is g(x,k+1)-g(x,k) (we define g(x,0)=0 here). If the points are picked on a circle of size 1/3 instead of the unit circle, that probability is equal to g(3x,k+1)-g(3x,k). Finally, if we have k such distances on the 1/3 circle less than x, then in order to have our answer >=x, we need all of those small distances to be "also close" on the original circle, which happens with probability 1/3^k; therefore we have established that f(x)=sum_k((g(3x,k+1)-g(3x,k))/3^k).

Since we're interested in the integral of f(x), we can swap summation and integration in the above formula, and learn that we're interested in sum_k((int(g(3x,k+1))-int(g(3x,k)))/3^k) = sum_k((int(g(x,k+1))-int(g(x,k)))/3^(k+1)).

Finally, int(g(x,k)) is simply the expected value of the k-th order statistic of the distances between consecutive points when n uniformly random points are chosen on a circle. It turns out it's possible to google it: int(g(x,k))=1/n*(1/(n-k+1)+1/(n-k+2)+...+1/n).

Most of the terms cancel out, and we get int(g(x,k+1))-int(g(x,k))=1/(n*(n-k)), and int(f(x))=sum_k(1/(n*(n-k)*3^(k+1))), where the summation is done over all values of k from 0 to n-1. This solves our problem, and the actual code we submit just computes this sum.
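The final sum can be computed with modular inverses via Fermat's little theorem. A sketch (not the actual contest code, and the names are mine; a real submission would precompute inverses instead of calling power n times):

```java
public class OrderStatSum {
    static final long MOD = 1_000_000_007L;

    static long power(long base, long exp) {
        long result = 1;
        base %= MOD;
        for (; exp > 0; exp >>= 1) {
            if ((exp & 1) == 1) result = result * base % MOD;
            base = base * base % MOD;
        }
        return result;
    }

    static long inverse(long x) { return power(x, MOD - 2); } // Fermat

    // sum over k = 0..n-1 of 1 / (n * (n-k) * 3^(k+1)), modulo 10^9+7
    static long answer(long n) {
        long sum = 0;
        long pow3 = 3;
        for (long k = 0; k < n; ++k) {
            sum = (sum + inverse(n % MOD * ((n - k) % MOD) % MOD * pow3 % MOD)) % MOD;
            pow3 = pow3 * 3 % MOD;
        }
        return sum;
    }
}
```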

As I was coming up with this solution, I did not just invent it from top to bottom. Instead, I was digging from both ends: first, I noticed that the setting is similar but not exactly the same as finding the smallest distance between n random points on a circle of size 1/3. Then, I googled that such distance has a known closed-form expression, and the same is true for the k-th order statistic. Then, I noticed that knowing how the k-th order statistic behaves allows us to split the probability space into areas where we just need to multiply by 1/3^k. Finally, I put the pieces together, verified that swapping summation and integration checks out, implemented the solution, and got a wrong answer on all samples: I had forgotten the extra 1/3 that comes from the fact that our circle is of size 1/3 instead of 1, and thus had 3^k instead of 3^(k+1) in the formula. The contest time was running out, and I had almost given up hope, but still started making small changes to the code expecting some issue in that vein; 3^(k-1) didn't help but 3^(k+1) did, and I very happily submitted with just 1 minute left in the round :)

Thanks for reading, and check back for more!

Thursday, May 2, 2019

A close to third week

TopCoder SRM 753 opened the Mar 18 - Mar 24 week in competitive programming (problems, results, top 5 on the left, analysis). Just ecnerwal and mnbvmar solved all three problems, and ecnerwal was so much faster that the challenge phase did not really matter. Congratulations on the victory!

AtCoder Grand Contest 032 took place on Saturday (problems, results, top 5 on the left, analysis). ksun48 solved problem F in just 30 minutes, and tourist solved problem E in just 15 minutes, but nobody was able to solve both during the round. Therefore the advantage went to contestants who went for F, and ksun48 was by far the fastest of those. Well done!

The hardest problem F had a very simple-looking statement. We pick n independent uniformly random points on a circle, and then find two points the (circular/angular) distance between which is as close to 1/3 of the circle as possible. What is the expected value of the difference between that distance, expressed as a fraction of the circle, and 1/3? n is up to 10^6, and you need to compute the answer using division modulo 10^9+7.

Open Cup 2018-19 Grand Prix of Moscow was the last Open Cup round before the ICPC 2019 World Finals (results, top 5 on the left). Moscow SU team was setting the problems, but many other top ICPC teams competed. Three future gold medalists appear on the screenshot on the left :) Team Past Glory was head and shoulders above everybody, being the only team to finish all 13 problems, and doing it in just 3:40. Amazing dominance!

Thanks for reading, and check back for more.