Sunday, September 26, 2021

A 202 week

Facebook Hacker Cup 2021 Round 2 narrowed the field to 500 qualifiers on Saturday (problems, results, top 5 on the left, analysis). maroonrk was a lot faster than everybody else, and he also bravely skipped writing a dedicated solution for C1, submitting problem C2 first and then quickly adapting its solution for C1. Congratulations on the clear first place!

This contest ended up being quite a struggle for me. First, I ran into the stack overflow issues that are a recurring topic for the Hacker Cup. I have changed my setup since last year, and now even about 15 minutes of googling during the round and another half an hour after it did not help me find a way to increase the stack size in the new setup. In the end I just gave up and submitted by compiling with g++ on the command line (in hindsight, this decision should have come earlier).
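For completeness, one well-known workaround for deep recursion in contest code is to run the solution on a separate thread with an explicitly sized stack, making the limit independent of the environment. A minimal sketch, with solve() standing in for the actual recursive solution; compile with -pthread:

```cpp
#include <pthread.h>

// Hypothetical placeholder for the actual recursive solution.
void solve() {}

void* run(void*) {
    solve();
    return nullptr;
}

int main() {
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setstacksize(&attr, 1024 * 1024 * 1024);  // ~1 GB stack
    pthread_t thread;
    pthread_create(&thread, &attr, run, nullptr);
    pthread_join(thread, nullptr);
    return 0;
}
```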

However, I'm still curious how to increase the stack size in my setup, which is CLion using WSL as the toolchain and compiling with clang++. I've tried various compiler/linker flags such as -Wl,-stack_size and -Wl,--stack, but none of them worked. I've tried editing /etc/security/limits.conf inside WSL, but that did not seem to have any effect. I've found answers suggesting that the stack size can only be changed in WSL 2, and I double-checked that I do in fact have WSL 2. I've tried calling setrlimit from within my code, but it did not help either. I've found mentions that calling sudo prlimit --stack=unlimited --pid $$; ulimit -s unlimited would help, but it's not clear how to do that when executing my program from within CLion, as I could not find a way to specify commands that need to run before my program. Putting that command in .profile did not help.

So, I'm turning to the readers of my blog for help. Do you know how to increase the stack size in the CLion+WSL+clang setup?

EDIT: this has been resolved in the comments. It turns out that I had not double-checked carefully enough and was in fact still on WSL 1. With WSL 2, editing /etc/security/limits.conf does the trick.
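For reference, the relevant entries would look something like this (my guess at a minimal configuration of the limits.conf format; pam_limits applies it to new login sessions, and the stack value is in kilobytes or "unlimited"):

```
# /etc/security/limits.conf inside the WSL 2 distribution
# <domain>  <type>  <item>  <value>
*           soft    stack   unlimited
*           hard    stack   unlimited
```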

This was not the end of my fight with C++, however. After implementing the solution for C2, I tried to check it before submitting by feeding it the input from C1 augmented with many random changes. I noticed that it agreed with my solution for C1 in debug mode, but produced a different answer in release mode! I went on to debug this for around half an hour and could not find any issues, until I finally noticed that compiling from the command line with g++ with optimization on (the release mode mentioned above was compiling with clang++ from within CLion) produced the correct answers, so I used that to make my submission, which turned out to be correct.

After the contest I discovered what the problem was by binary searching over clang++ flags: in release mode, CLion (which inherits this behavior from CMake, as I understand) passes the -DNDEBUG flag, which makes assert(f()) statements not execute f() at all, and I had used assertions to verify that the elements I try to erase from sets are actually found: assert(set.erase(x)). I guess everyone has to learn this lesson once :)
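Here is a self-contained illustration of the trap and of the safer pattern (a generic sketch, not my contest code):

```cpp
#include <cassert>
#include <set>

int main() {
    std::set<int> s = {1, 2, 3};

    // Dangerous: with -DNDEBUG (which CMake passes in Release builds),
    // assert() expands to nothing, so erase(2) is never executed and
    // 2 silently stays in the set.
    assert(s.erase(2));

    // Safe: perform the side effect unconditionally, then assert on
    // the stored result.
    auto erased = s.erase(3);
    assert(erased == 1);
    (void)erased;  // avoid an unused-variable warning in Release mode
    return 0;
}
```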

Finally, I was quite excited to come up with a cool solution for problem D inspired by the algorithm described in this post from 9 years ago (see the second solution to problem F). However, I missed the fact that even though I only discarded at most 18 numbers in my solution, some of those discarded numbers are differences of original numbers, or differences of differences (and so on), so we can end up discarding many more than 18 original numbers. Therefore my solution failed, but luckily the first four problems were enough to advance. Funnily enough, if I'm reading the feedback page correctly, this solution almost passed, failing only one testcase, where it discarded 31 numbers (it was only allowed to discard 25).

Open Cup 2021-22 Grand Prix of XiAn followed on Sunday (results, top 5 on the left, analysis). Team USA1 returned to their winning ways, even though they were not the first team to reach 10 problems: Ildar Gainullin got there 20 minutes earlier, but his penalty time was ruined by the +24 in problem I. Congratulations to both teams!

In my previous summary, I have mentioned an Open Cup problem: you are given a positive integer k up to 10^6, and need to produce any set of at most 30 integers with absolute values not exceeding 10^16 such that the number of its subsets with zero sum, including the empty subset, is exactly k.

Since we did not have anything else to code, I got to play with this problem using a computer during the round. First, I tried a greedy approach where we start with a set containing just the number 1, and then keep adding numbers that give us as many additional zero-sum subsets as possible (without exceeding k). This approach found a solution of size ~100 for large values of k, if I remember correctly. When looking at the solution, I noticed that after starting with a few 1s and -1s, it starts using bigger and bigger numbers, which, while locally optimal, means that on later steps we have many different sums, and therefore not many subsets sharing the same sum.

I tried to improve the solution by forcing it to use more 1s and -1s before falling back to the greedy approach, which brought the solution size down to ~70, still quite far from the required 30.

Then I tried to think about what a more constructive solution to this problem might look like. One idea that came to mind is to split the solution into two parts: in the first part, we use only positive numbers, and therefore do not get any zero-sum subsets. Then we use only negative numbers, and we get some kind of knapsack problem: for each negative number, we know the number of ways to build its negation as a sum of some positive numbers, and we need those counts to add up to k. It's not exactly a knapsack, though, since there might also be zero-sum subsets with more than one negative number.

However, this line of thought allowed me to come back to my greedy approach with a new idea: instead of starting with 1s and -1s, I tried starting with 1s and 2s (just iterating over how many of each we have, up to a total of 30), and then greedily adding only negative numbers, each time maximizing the number of new zero-sum subsets without exceeding k. This dramatically improved the performance, finding solutions with fewer than 30 numbers for k=10^6 and k=10^6-1. I'm not sure if this rationalization is correct, but I think the reason for such an improvement might be that now we do the "exponential blowup" only once instead of twice (using only positive numbers for it).

This solution did not pass, but it did pass quite a few tests with 1000 cases each, so I modified it to also try having some 3s, which was enough :)
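For concreteness, here is a sketch of the counting core that all these greedy variants rely on, under my reading of the approach above (names and structure are mine, not the actual contest code): we maintain a map from each achievable subset sum to the number of subsets producing it, so the gain from adding a candidate number x is exactly the current count of subsets summing to -x.

```cpp
#include <map>
#include <vector>

struct Builder {
    // dp[s] = number of subsets of the chosen numbers with sum s;
    // dp[0] counts the zero-sum subsets, including the empty one.
    std::map<long long, long long> dp{{0, 1}};
    std::vector<long long> chosen;

    // Adding x turns every subset that summed to -x into a new
    // zero-sum subset, so the gain is dp[-x].
    long long gainOf(long long x) const {
        auto it = dp.find(-x);
        return it == dp.end() ? 0 : it->second;
    }

    void add(long long x) {
        std::map<long long, long long> old = dp;  // snapshot, so x is used at most once
        for (auto [s, c] : old) dp[s + x] += c;
        chosen.push_back(x);
    }
};
```

The greedy step then tries candidates x = -s over the sums s currently present in dp, and picks the one with the largest gainOf(x) that keeps dp[0] + gainOf(x) <= k.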

Thanks for reading, and check back next week!

Saturday, September 25, 2021

An unwrapping week

TopCoder SRM 813 took place last Friday (problems, results, top 5 on the left). With this victory tourist has guaranteed himself the TCO qualification spot, so that he does not have to wake up for the last SRM which takes place in the middle of the night — well done! I have grossly overestimated the chances of other solutions being broken, so my challenges against solutions that did not "look right" brought me -75 points, and that despite one of the challenges being successful :)

The second stage of the new Open Cup season, the Grand Prix of IMO, took place on Sunday (results, top 5 on the left, analysis). Team USA1 continued their streak with the 21st top-3 finish in a row, spanning from the end of the 2019-20 season, but they could not get their fifth victory in a row since team ext71 was 5 penalty minutes faster after getting their 12th problem dramatically at 4:58 from +3. Congratulations to both teams!

I've had some fun digging through the constructive problem K: you are given a positive integer k up to 10^6, and need to produce any set of at most 30 integers with absolute values not exceeding 10^16 such that the number of its subsets with zero sum, including the empty subset, is exactly k. I doubt it's possible to solve this problem completely in one's head or on paper, so if you're interested, I encourage you to try your ideas out with a computer :)

Finally, CodeChef September Cook-Off wrapped up the week (problems, results, top 5 on the left, analysis). The two last problems were a lot more difficult than the rest, and only gennady.korotkevich was able to solve them both. Great job!

In my previous summary, I have mentioned a VK Cup problem: you have a tape that is infinite in both directions and is split into cells. You're going to write a given random permutation of size n<=15000 onto this tape in the following manner: you write the first number somewhere, then you either stay in the same cell or move one cell to the left or to the right, then write the second number there, then again either stay or move to an adjacent cell, write the third number there, and so on. Whenever you write a number into a cell that already has a number, the old number is overwritten and therefore discarded. After all n numbers have been written, we look at the resulting sequence written on the tape from left to right. What is the maximum possible size of an increasing subsequence of this sequence? Note that the given permutation is guaranteed to have been picked uniformly at random from all permutations of size n.

The first idea is relatively standard for problems of this type: overwriting is hard to deal with, so let us look at the process from the end. Then instead of overwriting we will just skip writing numbers to cells that already have a number, and therefore we will always have a segment of cells that have numbers that will not change anymore, and the rest of the cells free.

The second thing to get out of the way is the "increasing subsequence" complication. In the new setup where we go from the end, it's clear that there's no point at all in writing numbers that will not end up in that increasing subsequence, as we can simply stay in the current position and skip writing the unnecessary number instead of moving left/right to write it. The only exception is the very first number we write (in reverse order, so the last number of the input permutation), which is always written even if it's useless for the increasing subsequence. We can deal with this peculiar property of the last number separately, so for the remainder of this post I will forget about it for simplicity.

Now all numbers we write must form an increasing sequence. The property "increasing" is local, so all subsequent numbers will only be compared with the first or the last number written so far, which leads us to a dynamic programming solution. Having processed some prefix of the reversed permutation, only four things matter:

  1. what is the first number of the sequence we have so far
  2. what is the last number of the sequence we have so far
  3. what is the length of the sequence we have so far
  4. where in this sequence are we
However, these four things plus the length of the prefix we have processed lead to O(n^5) states, so while polynomial, this solution does not really cut it for n<=15000. So we need a few tricks to reduce the state space.

The first trick is to notice that since the permutation is random, the length of the sequence will be O(n^0.5) instead of O(n) (more info). Therefore our position inside the sequence also has just O(n^0.5) options, so we have reduced the number of states to O(n^4) just by staring at the solution for a bit and without changing it :)
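This is the classic fact that the longest increasing subsequence of a random permutation has expected length about 2*sqrt(n). A quick empirical check using the standard patience-sorting LIS (a throwaway snippet of mine, not part of the actual solution):

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <numeric>
#include <random>
#include <vector>

// Patience-sorting LIS in O(n log n): tails[k] is the smallest possible
// tail of an increasing subsequence of length k+1.
int lisLength(const std::vector<int>& p) {
    std::vector<int> tails;
    for (int x : p) {
        auto it = std::lower_bound(tails.begin(), tails.end(), x);
        if (it == tails.end()) tails.push_back(x);
        else *it = x;
    }
    return (int)tails.size();
}

int main() {
    std::mt19937 rng(42);
    for (int n : {1000, 15000}) {
        std::vector<int> p(n);
        std::iota(p.begin(), p.end(), 0);
        std::shuffle(p.begin(), p.end(), rng);
        std::printf("n=%d  LIS=%d  2*sqrt(n)=%.0f\n",
                    n, lisLength(p), 2 * std::sqrt((double)n));
    }
    return 0;
}
```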

The next relatively standard trick is to notice that we don't need the first two dimensions in such detail. Instead, we can store the smallest last number for each combination of (first number, length, our position), which brings us down to O(n^3) states. Symmetrically, we could have also stored the biggest first number for each combination of (last number, length, our position).

Here comes another optimization: if we only consider states right after we have written a new number, as opposed to at all moments, then three dimensions of our dynamic programming collapse into one! For example, if we have just written a new first number, then "where in the sequence are we" has just one option — at that first number. Moreover, the prefix of the reversed permutation that we have processed is also fixed — it's the prefix up to that first number. This optimization without the previous optimization would have brought the number of states down to O(n^2.5).

Can we apply both optimizations together? This is what I could not figure out during the contest, but in retrospect it's quite straightforward: yes, we can! The key thing to notice is that we can't apply just "smallest last number" or just "biggest first number" optimizations by themselves. We must use both: if we have just written a new first number, then we need to store the smallest last number for this (first number, length) pair, and if we have just written a new last number, then we need to store the biggest first number for this (last number, length) pair! This brings the number of states down to just O(n^1.5), finally something manageable for n<=15000.

However, having reduced the number of states we have increased the number of transitions: instead of just considering what to do with the next number for a transition, we have to iterate over which will be the next number to be written, leading to O(n) transitions for each state and O(n^2.5) running time.

However, we can use yet another relatively standard trick: instead of naively doing all transitions, we can use a segment tree that stores all "pending" transitions, and find the dynamic programming value for a new state in O(log n), solving our problem in O(n^1.5 log n), which is fast enough. I hope you enjoyed this long process of unwrapping the present as much as I did :)
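As a final aside, the building block behind that last trick is an ordinary segment tree with point updates and range queries; since the dynamic programming stores minima (the smallest last number), a range-minimum tree is the natural fit. A generic sketch with the problem-specific aggregation left out (names are mine):

```cpp
#include <algorithm>
#include <limits>
#include <vector>

// Iterative segment tree: point "relax to minimum" update and
// range-minimum query over the half-open interval [l, r), both O(log n).
struct SegTree {
    int n;
    std::vector<long long> t;
    explicit SegTree(int n_)
        : n(n_), t(2 * n_, std::numeric_limits<long long>::max()) {}

    void update(int i, long long v) {
        i += n;
        t[i] = std::min(t[i], v);
        for (; i > 1; i >>= 1) t[i >> 1] = std::min(t[i], t[i ^ 1]);
    }

    long long query(int l, int r) const {
        long long res = std::numeric_limits<long long>::max();
        for (l += n, r += n; l < r; l >>= 1, r >>= 1) {
            if (l & 1) res = std::min(res, t[l++]);
            if (r & 1) res = std::min(res, t[--r]);
        }
        return res;
    }
};
```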

Thanks for reading, and check back for this week's summary!