## Sunday, January 27, 2019

### A Galois week

Codeforces returned this week with its Round 534 on Tuesday (problems, results, top 5 on the left, analysis). mnbvmar won not just thanks to being the only contestant to solve the hardest problem E, but also thanks to being much faster than his peers on the first three problems, which might've been the key to unlocking the strategy of solving E instead of D. Well done!

TopCoder SRM 748 on Saturday wrapped up the second stage of the race to TCO19 (problems, results, top 5 on the left, my screencast, analysis). tourist was the only one able to solve the hard problem, but he also had the fastest time on both the easy and the medium. Congratulations on the very convincing victory!

The hard problem introduced me to the concept of nim multiplication which, on the one hand, allows solving products of coin-turning games, and on the other hand, together with nim addition (bitwise xor), induces a finite field of characteristic 2 over the nimbers less than 2^(2^n) for any n. I find this sudden appearance of finite fields exceedingly beautiful :)

The Wikipedia article implies that for computing the nim multiplication, we just need the following two facts:

1. The nim product of a Fermat power of two (a number of the form 2^(2^n)) and a smaller number is equal to their ordinary product;
2. The nim square of a Fermat power of two x is equal to 3x/2, as evaluated under the ordinary multiplication of natural numbers.

It took me quite some time to understand how to compute the nim multiplication using those facts, but now I think I got it, so I'll try to explain (also, does anybody have a link to where this is explained nicely? I could not find one).

First, suppose we know the nim products of powers of two. Then we can use the fact that we have a field to compute all other products by representing the multipliers as xors (nim-sums) of powers of two, for example (where all additions and multiplications are the nim ones): 3*6=(2+1)*(4+2)=2*4+2*2+1*4+1*2=8+3+4+2=8+4+1=13.

Now, the above rules allow us to multiply Fermat powers of two, but we need to learn to multiply arbitrary powers of two. Here we once again use the binary representation, this time of the exponent, to represent any power of two as a (both nim and ordinary) product of Fermat powers of two, and then rely on associativity of multiplication to rearrange the Fermat powers of two in sorted order. If they're all distinct, then we can go from smallest to largest to learn that their product is equal to their ordinary product according to the first rule above; and if a Fermat power is repeated, then we use the second rule above to effectively decompose the problem into two. If we always apply the second rule to the smallest repeated power, I think we never end up with more than two occurrences of any power in our intermediate computations.

Here's an example:

2048*128 = (256*4*2)*(16*4*2) = 2*2*4*4*16*256 =
(1+2)*4*4*16*256 =
4*4*16*256 + 2*4*4*16*256 =
(2+4)*16*256 + 2*(2+4)*16*256 =
2*16*256 + 4*16*256 + 2*2*16*256 + 2*4*16*256 =
2*16*256 + 4*16*256 + (1+2)*16*256 + 2*4*16*256 =
2*16*256 + 4*16*256 + 16*256 + 2*16*256 + 2*4*16*256 =
4*16*256 + 16*256 + 2*4*16*256 =
16384 + 4096 + 32768 = 53248.

Those two ideas can be combined into a single algorithm, as described in exercise 4 at the bottom of page 14 of this article from 1978.
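Here's a sketch of such a combined recursive algorithm in Python: split each operand into a high and a low half at the largest relevant Fermat power, and apply the two rules above together with a Karatsuba-style expansion of the cross terms (the function name and structure are my own, not the 1978 exercise verbatim):

```python
def nim_mul(a, b):
    """Nim product of two nonnegative integers (unmemoized sketch)."""
    if a < 2 or b < 2:
        return a * b  # multiplying by 0 or 1 coincides with ordinary multiplication
    # Find the smallest k such that both operands are below 2^(2^k).
    k = 1
    while max(a, b) >> (1 << k):
        k += 1
    h = 1 << (k - 1)        # the lower half is h bits wide; F = 2^h is a Fermat power
    mask = (1 << h) - 1
    ah, al = a >> h, a & mask
    bh, bl = b >> h, b & mask
    # a = ah*F xor al and b = bh*F xor bl; by rule 1, ah*F is an ordinary product.
    low = nim_mul(al, bl)
    t = nim_mul(ah, bh)
    # Karatsuba-style identity over xor: ah*bl xor al*bh = (ah^al)*(bh^bl) xor low xor t.
    high = nim_mul(ah ^ al, bh ^ bl) ^ low
    # By rule 2, F*F = 3F/2 = F xor F/2, so the t*F*F term contributes t to the
    # high half and a recursive product t*(F/2) to the low half.
    return (high << h) ^ low ^ nim_mul(t, 1 << (h - 1))
```

Running it on the examples above gives nim_mul(3, 6) == 13 and nim_mul(2048, 128) == 53248, and nim_mul(256, 256) == 384 = 3*256/2 confirms the Fermat-square rule.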

Thanks for reading, and check back next week!

## Sunday, January 20, 2019

### An anti-library week

TopCoder held two SRMs this week. SRM 746 took place on Tuesday (problems, results, top 5 on the left, my screencast, analysis). Once again the topic of floating-point precision took center stage: 27 solutions were submitted for the hard problem, many relying on floating-point and/or ternary search, but only 5 passed, all computing the answer exactly and using either arbitrary-length integers, arbitrary-length floating point, or int128.

I've already made this point in the Codeforces discussion thread, but let me repeat it here: the super high precision requirement had a side effect of essentially penalizing the contestants who had a prewritten 3D geometry library: using the library as-is would fail because of precision issues, and adapting a set of interconnected methods from floating point to big integers is actually much more time-consuming than figuring out the relatively simple required computation from scratch and implementing it. Looking at the five accepted solutions, four and maybe all five seem to have been written from scratch.

This has also provided for an unusual challenge phase, with 9 out of 26 successful challenges coming on the hardest problem. In most cases ending up in room 1 is a bad sign for the challenge phase (I believe the rooms are sorted by decreasing average rating, or something like that), but here it was exactly the opposite :)

SRM 747 followed on Saturday (problems, results, top 5 on the left, my screencast). The relatively standard medium and hard meant that the first place was decided during the challenge phase, and the easy problem seemed to have been designed specifically to make the challenge phase more interesting: it was a constructive problem with such loose requirements that a lot of solutions worked, and even more solutions passed the sample cases. In fact, one could almost say that it was possible to pass the samples by accident :) However, challenging those solutions was also not trivial, as they might pass a challenge case by accident just as well.

Thanks for reading, and check back next week!

## Sunday, January 13, 2019

### A Dilworth week

This week was light on contests, so let me talk about the Codeforces problems I've mentioned in last week's summary. The first one was: you are given a permutation of size n <= 100000, and you need to split it into subsequences such that each subsequence is either monotonically decreasing or monotonically increasing. The number of subsequences needs to be small, but does not need to be minimal. More precisely, your solution should have the optimal worst case performance for any given n: the number of subsequences should not exceed the maximum number of subsequences necessary for any permutation of size n.

One fact that definitely looks helpful is Dilworth's theorem: it tells us that we either have a long increasing subsequence, or can split our sequence into few decreasing subsequences. More formally, let's define f(n) to be the maximum number of subsequences our solution produces for any permutation of size n. Applying Dilworth's theorem allows us to build a solution with the following recurrence on f():

f(n) = max_k min(k, 1 + f(n-k))

where the inner minimum corresponds to the fact that we can either split everything into k decreasing subsequences, or remove an increasing subsequence of length k and apply our solution recursively.

Printing the first few values of f(n) computed using the recurrence above, we get:

f(1) = 1
f(2) = 1
f(3) = 2
f(4) = 2
f(5) = 2
f(6) = 3
f(7) = 3
f(8) = 3
f(9) = 3
f(10) = 4
f(11) = 4
f(12) = 4
f(13) = 4
f(14) = 4
f(15) = 5
f(16) = 5
...

Here we can notice that the smallest value of n such that f(n)=k is the k-th triangular number: 1+2+...+k = k*(k+1)/2. Having noticed it, it's relatively easy to prove by induction.
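The table above is easy to reproduce directly from the recurrence (a quick sketch, with f(0) = 0 by convention; the adversary picks the longest-increasing-subsequence length k, and we take the better of the two options):

```python
def f_table(n_max):
    # f(n) = max over k of min(k, 1 + f(n - k)): either split everything into
    # k decreasing subsequences (Dilworth), or remove an increasing
    # subsequence of length k and recurse on the remaining n - k elements.
    f = [0] * (n_max + 1)
    for n in range(1, n_max + 1):
        f[n] = max(min(k, 1 + f[n - k]) for k in range(1, n + 1))
    return f
```

For example, f_table(16)[1:] reproduces the list above, and the first index where each value k appears is indeed the triangular number k*(k+1)/2.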

So we have a solution using Dilworth's theorem, but is it good enough for the problem? Does it have the same worst case performance as the best possible solution for any given n?

The answer is yes, and the triangular numbers together with the recurrence above give us a hint at how to see it. We need to come up with some permutation of size 1+2+...+k for which we can prove that it can't be split into fewer than k increasing or decreasing subsequences.

Here is one such sequence for k=5: 1, 3, 2, 6, 5, 4, 10, 9, 8, 7, 15, 14, 13, 12, 11. In other words, we take a decreasing segment of length 1, then append a decreasing segment of length 2 with all numbers bigger than the previous segment, then append a decreasing segment of length 3 with all numbers bigger than the previous segment, and so on until a segment of length k. Consider any split of this sequence into increasing and decreasing subsequences. Suppose it has x increasing subsequences. Each increasing subsequence covers at most one element of each decreasing segment, so in each of the k-x segments with more than x elements, at least one number will be left uncovered. But a decreasing subsequence can't have numbers from two different segments, so we will need at least k-x decreasing subsequences to cover the rest, and the total will be at least k.
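For concreteness, the construction is easy to generate for any k (a small sketch; the function name is mine):

```python
def adversarial(k):
    # Concatenate decreasing runs of lengths 1, 2, ..., k, each run using
    # larger values than everything appended before it.
    seq, hi = [], 0
    for length in range(1, k + 1):
        hi += length
        seq.extend(range(hi, hi - length, -1))
    return seq
```

adversarial(5) reproduces the k=5 example above, a permutation of size 5*6/2 = 15.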

This proves that our solution does in fact have the optimal worst case performance for any given n. One small thing that the Wikipedia article doesn't give is a constructive way to apply the theorem: finding the longest increasing subsequence is a well-known algorithm, but how do we actually split the sequence into that many decreasing subsequences quickly? Some quick googling helped me find the answer during the contest: in the longest increasing subsequence algorithm, let's call the length of the longest increasing subsequence ending in each element the level of that element. It turns out that the elements of the same level always form a non-increasing (and, since we have a permutation, decreasing) subsequence, as otherwise the element to the right would have a higher level, so we just split by level.
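The level-splitting trick can be sketched on top of the standard bisect-based O(n log n) longest increasing subsequence computation (a sketch with my own naming, not the contest code):

```python
import bisect

def split_into_decreasing(perm):
    # tails[i] = smallest possible tail of an increasing subsequence of
    # length i + 1 seen so far; the insertion point for x is exactly the
    # level of x (LIS ending at x has length level + 1).
    tails, groups = [], {}
    for x in perm:
        i = bisect.bisect_left(tails, x)
        if i == len(tails):
            tails.append(x)
        else:
            tails[i] = x
        groups.setdefault(i, []).append(x)
    # Elements sharing a level form a decreasing subsequence.
    return [groups[i] for i in range(len(tails))]
```

On the adversarial sequence for k=5 from above, this produces exactly the five decreasing segments 1 / 3, 2 / 6, 5, 4 / 10, 9, 8, 7 / 15, 14, 13, 12, 11.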

The second problem I mentioned turned out to be a repeat from 9 years ago: you are given an nxm matrix such that n*m<=300000, with one character from the set A, C, G or T in each cell. You need to change as few characters as possible (to other characters from the set) in such a way that each 2x2 submatrix has all four different characters.

Given that there are consequently two editorials for it, I don't think I can add much :)

Thanks for reading, and check back next week!

## Sunday, January 6, 2019

Codeforces did not take any breaks, and held two contests in the first week of 2019. The first one was appropriately named "Hello 2019" (problems, results, top 5 on the left, my screencast, analysis). V--o_o--V took a gamble and started with problem G, which looks doable on the surface but takes quite a lot of time to get right. This was not optimal in terms of points, but neither that nor the five incorrect attempts mattered in the end, as nobody else was able to solve all problems. Congratulations to V--o_o--V!

I was actually quite close to getting all problems, as I fixed the last bug in my solution for H just 30 seconds after the end of the contest (you can still see it on the screencast). And given the unexpectedly low point total of V--o_o--V, that would have been enough for first place :)

Problem E mostly relied on a well-known theorem, but had an interesting twist on top of it: you are given a permutation of size n <= 100000, and you need to split it into subsequences such that each subsequence is either monotonically decreasing or monotonically increasing. The number of subsequences needs to be small, but does not need to be minimal. More precisely, your solution should have the optimal worst case performance for any given n: the number of subsequences should not exceed the maximum number of subsequences necessary for any permutation of size n.

Codeforces Round 530 took place one day later (problems, results, top 5 on the left, my screencast, analysis so far partially in Russian). Problem E required a lot of careful implementation, and the speed of that implementation was the deciding factor in the standings. Of the four contestants finishing before the round ended, Radewoosh was the fastest. Well done!

I found problem B quite cute: you are given an nxm matrix such that n*m<=300000, with one character from the set A, C, G or T in each cell. You need to change as few characters as possible (to other characters from the set) in such a way that each 2x2 submatrix has all four different characters. Can you see a way?

In my previous summary, I've mentioned a couple of problems. The first one came from AtCoder: you are given a number k which is at most 1000. You need to come up with any nxn toroidal grid (the first row is adjacent to the last row, and the first column is adjacent to the last column), where you choose the number n but it must be at most 500, such that each cell of the grid is colored with one of the k colors, each color is used at least once, and for all pairs of colors i and j all cells with color i must have the same number of neighbors with color j.

During the round, I could only come up with approaches that produce a number of colors that is either smaller than n, or divisible by n. At some point I had an idea to write a backtracking solution that would find me any solution that does not have these properties, hoping that would help come up with its generalization. In retrospect, that might have done it, as the following approach (which seems to be the only one that works) does look recognizable from one example: let's pick an even n, and split the grid into n (toroidal) diagonals. For each diagonal, we either color it with one color, or with two alternating colors, thus making it possible to get any number of colors between n and 2n. Since each element of a diagonal has exactly two neighbors from each neighboring diagonal, the required properties hold.
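Here's a small sketch of that construction together with a brute-force checker. The particular choice of n (the even number k or k-1) and of which diagonals get two alternating colors are my own illustration choices, assuming k >= 2:

```python
from collections import defaultdict

def build_grid(k):
    # Pick an even n with n <= k <= 2n; the first (k - n) diagonals get two
    # alternating colors, the rest get one color each, for k colors in total.
    n = k if k % 2 == 0 else k - 1
    two_color = k - n
    diag_colors, c = [], 0
    for d in range(n):
        if d < two_color:
            diag_colors.append((c, c + 1))
            c += 2
        else:
            diag_colors.append((c, c))
            c += 1
    # Cell (i, j) lies on toroidal diagonal (i + j) mod n; alternate by the
    # parity of i, which is consistent around the torus because n is even.
    return [[diag_colors[(i + j) % n][i % 2] for j in range(n)] for i in range(n)]

def neighbor_counts_consistent(grid, k):
    # For every ordered color pair (ci, cj), all cells of color ci must see
    # the same number of cj-colored neighbors on the torus.
    n = len(grid)
    seen = defaultdict(set)
    for i in range(n):
        for j in range(n):
            cnt = [0] * k
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                cnt[grid[(i + di) % n][(j + dj) % n]] += 1
            for cj in range(k):
                seen[(grid[i][j], cj)].add(cnt[cj])
    return all(len(s) == 1 for s in seen.values())
```

The property holds because every cell has exactly two neighbors on each adjacent diagonal, one of each i-parity, so a solid adjacent diagonal always contributes two neighbors of its color and each half of an alternating adjacent diagonal contributes exactly one.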

The other problem came from Codeforces. I've cited a simplified version of the statement: there is a pile with n<=200000 stones, and two players are playing Nim with a fixed set of possible moves a1, a2, ..., ak: in each turn a player can take a number of stones that is equal to one of the ai, and the player who can't make a move loses. Your goal is to find the nimbers for all values of n between 1 and 200000.

I forgot to mention one important additional property in this simplification: the nimbers are guaranteed to be not too big (below 100), which is actually important for the solution below to be fast. Sorry for that!

Let's put all possible moves in one bitmask m, and also store a bitmask for each nimber value that represents which pile sizes have a move to a position with that nimber value. Those bitmasks start as empty. In order to determine the nimber for the next pile size, we go through those bitmasks until we find one that has a zero in the corresponding position. Then we need to update the bitmasks for higher pile sizes, and the key trick is that we only need to update one of them: the one corresponding to the newly determined nimber, and the update is simply applying bitwise or with the move bitmask m shifted left by the current pile size. This means that the solution runs in O(n^2/64 + n*max_nimber) (I know, this is not a perfect use of the O-notation, but I hope you understand what I mean), which is fast enough.
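A sketch of this bitmask approach in Python, where arbitrary-precision integers play the role of the bitmasks (naming is mine):

```python
def subtraction_game_nimbers(moves, n_max, nimber_cap=100):
    # m: bitmask with bit a set for every allowed move size a.
    m = 0
    for a in moves:
        m |= 1 << a
    # reach[v]: bit p is set iff some move from pile size p leads to nimber v.
    reach = [0] * (nimber_cap + 1)
    reach[0] = m  # the empty pile has nimber 0; every pile of size a reaches it
    g = [0] * (n_max + 1)
    for p in range(1, n_max + 1):
        v = 0  # mex: the first nimber value not reachable from pile size p
        while reach[v] >> p & 1:
            v += 1
        g[p] = v
        # Every pile p + a can now move to a position of nimber v, so a single
        # shifted or updates all higher pile sizes at once.
        reach[v] |= m << p
    return g
```

For example, with moves {1, 2, 3} this reproduces the classic subtraction-game pattern g[p] = p mod 4.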

Thanks for reading, and check back next week!

### And the best problem of 2018 is...

According to the vote, the best problem of 2018 is the Open Cup problem about rolling a ball in a maze while collecting all targets in some order, by jh05013, with solution in this post. My vote also went to this problem.

Here's its statement in an abridged form once again: you are given a 50x50 grid where some cells are empty, and some are walls. There are also walls surrounding the grid. Some empty cells are marked as targets, and there's also a ball in one of the empty cells. You can move the ball in each of the four cardinal directions, but once it starts rolling, it rolls in the same direction until it hits the wall, at which point you can pick a new direction, and so on. Can you roll the ball in such a way that it passes through all targets in some order?

Congratulations to jh05013, to the Open Cup, and thanks to all problemsetters of 2018!

More precisely, since there were only 91 people voting, I'd say we're about 60% confident that the best problem of 2018 is that one :) Here's the Colab where I try to estimate that confidence. Of course that number is meaningless without explaining the assumed underlying model yada yada, but I hope it's good enough for a ballpark estimate. If people who actually, unlike myself, understand how statistics and polling work are reading this: is that a good way to get confidence numbers for a poll? What is a better way?

Thanks for reading, and check back later today for this week's summary!

## Wednesday, January 2, 2019

### Best problems of 2018

Just like last year, I have reviewed the problems I've highlighted in this blog, and have picked the shortlist of five problems that I think were the best in 2018 (in chronological order):
Which one do you think is the very best?

There are other discussions and votes about the best problem of 2018, check those out :) In that second post, snarknews is also announcing the Prime New Year contest, which consists of problems given at top-level team competitions in 2018 but not solved by anybody there. If you like a real challenge and don't have much to do during the holidays, here's your link.

Happy New Year!