Last week had an Open Cup round for the fourth week in a row. The Open Cup 2018-19 Grand Prix of America allowed teams from all over the world to participate in NAIPC 2019 (problems, results, top 5 on the left, NAIPC results, analysis). Just 3.5 hours were enough for team Past Glory to wrap the problemset up, a good 40 minutes before other teams could do the same. Congratulations on the win!

In my previous summary, I have mentioned two (in some sense three :)) problems. The first group was from the AtCoder World Tour Finals: consider an infinite 2D grid, where each cell can be either black or white. Initially, exactly one cell was black, and then we repeatedly applied the following operation: take some integers *x* and *y*, and invert the color of three cells: (*x*, *y*), (*x*+1, *y*) and (*x*, *y*+1). You are given the set of black cells in the final state of the grid. There are at most 10^{5} black cells in C1 and at most 10^{4} in C2, and each black cell has coordinates not exceeding 10^{17} by absolute value. Your goal is to find the coordinates of the only cell that was black originally. In problem C1 you know that its *y*-coordinate was 0, and in C2 there are no further constraints.

A typical idea in this type of problem is to come up with an invariant that is not changed by the operation and that can be efficiently computed for the source and target positions. A typical invariant that plays well with inversions is boolean or bitwise xor. Let's say that a white cell is a 0 and a black cell is a 1, and let's pick some set of cells *S*; then our invariant will be the bitwise xor of the cells in *S* (in other words, the parity of the number of black cells in *S*).

Not any set *S* works, though: we must make sure that for each invertible triple (*x*, *y*), (*x*+1, *y*) and (*x*, *y*+1) either 0 or 2 cells belong to *S*, so that the xor does not change when we invert the triple. Suppose that for some row *y*=*y*_{0} the set *S* contains exactly one cell (*x*_{0}, *y*_{0}). The triple (*x*_{0}, *y*_{0}), (*x*_{0}+1, *y*_{0}), (*x*_{0}, *y*_{0}+1) must have 0 or 2 cells in *S*, and since (*x*_{0}, *y*_{0}) is in *S* but (*x*_{0}+1, *y*_{0}) is not, (*x*_{0}, *y*_{0}+1) must be in *S*. Applying a similar argument, we find that in the row *y*=*y*_{0}+1 exactly two cells must be in *S*: (*x*_{0}, *y*_{0}+1) and (*x*_{0}-1, *y*_{0}+1). Then we can compute which cells must be in *S* in the row *y*=*y*_{0}+2, and so on.

We can realize by looking at the resulting pattern, or by understanding what the process really is, that in general a cell (*x*_{0}-*k*, *y*_{0}+*n*) is in *S* if and only if C(*n*, *k*) is odd. And that, in turn, is true if and only if *n*&*k*=*k*, where & denotes bitwise and (a special case of Lucas' theorem). There are still multiple ways to extend the set *S* to rows with *y*<*y*_{0}, so to avoid thinking about that we will always pick a very small *y*_{0} that is below all interesting points.

Now it is very easy to compute our invariant for the input set of cells: we need to count the parity of the number of such (*x*, *y*) in the set that (*y*-*y*_{0})&(*x*_{0}-*x*)=*x*_{0}-*x*. But we know that the operations do not change the invariant, and that initially we had only one black cell (*x*_{A}, *y*_{A}). This means that we have an oracle that can tell us whether (*y*_{A}-*y*_{0})&(*x*_{0}-*x*_{A})=*x*_{0}-*x*_{A} for any two numbers *x*_{0} and *y*_{0} such that *y*_{0}<=*y*_{A}.
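For intuition, this oracle can be sketched directly from the definitions above (a minimal brute-force illustration; the function name and signature are mine, not from the problem):

```python
def simple_oracle(black_cells, x0, y0):
    """Parity of black cells (x, y) with (y - y0) & (x0 - x) == x0 - x.

    Equivalently: the xor over the invariant set S, where a cell
    (x0 - k, y0 + n) is in S iff C(n, k) is odd, i.e. n & k == k."""
    parity = 0
    for x, y in black_cells:
        n, k = y - y0, x0 - x
        # k < 0 (or bits of k outside n) means C(n, k) = 0, so the cell is not in S
        if n >= 0 and k >= 0 and (n & k) == k:
            parity ^= 1
    return parity
```

Since every operation toggles either 0 or 2 cells of *S*, this parity is the same for the final configuration and for the single original cell.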

In problem C1, we know that *y*_{A}=0, so we can just pick *y*_{0} in such a way that -*y*_{0} has all bits set except one, and then the oracle will effectively tell us the corresponding bit of *x*_{0}-*x*_{A}, so we can determine *x*_{A} in a logarithmic number of queries.
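A possible sketch of the C1 reconstruction (the names, the bound *B* and the brute-force oracle are my assumptions, not the official code): fix an anchor *x*_{0} to the right of all cells so that *d*=*x*_{0}-*x*_{A} is non-negative, and for each bit *b* pick *y*_{0} so that -*y*_{0} has all bits set except bit *b*; the oracle then returns 1 exactly when bit *b* of *d* is 0.

```python
def simple_oracle(black_cells, x0, y0):
    """Parity of black cells (x, y) with (y - y0) & (x0 - x) == x0 - x."""
    parity = 0
    for x, y in black_cells:
        n, k = y - y0, x0 - x
        if n >= 0 and k >= 0 and (n & k) == k:
            parity ^= 1
    return parity

def solve_c1(black_cells, B):
    """Recover xA when yA = 0, assuming all |coordinates| < 2^B."""
    x0 = 1 << B                  # anchor to the right of xA, so d = x0 - xA >= 0
    d = 0
    for b in range(B + 2):       # d can be as large as 2^(B+1)
        # -y0 has bits 0..B+1 set, except bit b
        y0 = -(((1 << (B + 2)) - 1) ^ (1 << b))
        # since yA = 0, the oracle checks whether d is a submask of -y0,
        # i.e. whether bit b of d is zero
        if simple_oracle(black_cells, x0, y0) == 0:
            d |= 1 << b
    return x0 - d
```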

In problem C2 things are not so simple. However, suppose we have found some *x*_{0} and *y*_{0} such that (*y*_{A}-*y*_{0})&(*x*_{0}-*x*_{A})=*x*_{0}-*x*_{A}, in other words we made our oracle return 1. Now we can go from the highest bit to the lowest bit, and by calling our oracle 3 additional times to check what happens if we add 2^{k} to *x*_{0} and/or subtract 2^{k} from *y*_{0}, we can determine the *k*-th bit of *y*_{A}-*y*_{0} and of *x*_{0}-*x*_{A}; we then set both bits to 0, proceed to the lower bits, and eventually recover *y*_{A} and *x*_{A}.

The tricky part lies in the initial step of making the oracle return 1 for something: the probability of *n*&*k*=*k* for random *n* and *k* is roughly 0.75^{number_of_bits}, which is way too low to just stumble upon it. This is how far I got during the contest, so the remaining magic is closely based on the official editorial and my conversations with Makoto.

Instead of running the oracle for the set *S* with just one cell (*x*_{0}, *y*_{0}) in the row *y*=*y*_{0}, we will run it with the set *S* which has all cells (*x*_{0}, *y*_{0}) in the row *y*=*y*_{0} where *x*_{0} mod 3=*u* and *l*<=*x*_{0}<=*r*, where *y*_{0}, *u*, *l* and *r* are the parameters of the oracle. This requires counting the number of *x*_{0} satisfying the above criteria and also the constraint (*y*-*y*_{0})&(*x*_{0}-*x*)=*x*_{0}-*x* for each black cell in the input, which can be done by a relatively standard dynamic programming on the bitwise representation of *x*_{0}.
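To make the counted quantity concrete, here is a brute-force reference version of this complex oracle (names are my own; it loops over the whole range, which is exponentially slower than the digit DP the real solution needs, but computes the same parity):

```python
def complex_oracle_bruteforce(black_cells, y0, u, l, r):
    """Parity, over black cells (x, y), of the number of anchors x0 with
    l <= x0 <= r, x0 mod 3 == u and (y - y0) & (x0 - x) == x0 - x."""
    parity = 0
    for x, y in black_cells:
        n = y - y0
        if n < 0:
            continue
        for x0 in range(l, r + 1):
            k = x0 - x
            if x0 % 3 == u and k >= 0 and (n & k) == k:
                parity ^= 1
    return parity
```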

As we know, this will give us the parity of the number of such *x*_{0} that *x*_{0} mod 3=*u*, *l*<=*x*_{0}<=*r*, and (*y*_{A}-*y*_{0})&(*x*_{0}-*x*_{A})=*x*_{0}-*x*_{A}. A crucial observation is: no matter what *y*_{0} we pick, if *l* is very small and *r* is very large, then for at least one *u* from the set {0, 1, 2} this oracle will return 1! This can be proven by induction on *y*_{A}: when *y*_{A}=*y*_{0}, (*y*_{A}-*y*_{0})&(*x*_{0}-*x*_{A})=*x*_{0}-*x*_{A} holds only when *x*_{0}=*x*_{A}, so the oracle will return 1 when *u*=*x*_{A} mod 3 and 0 in the other two cases. When *y*_{A} increases by one, given our C(*n*, *k*)-like propagation, every *x*_{0} for which the oracle returns 1 contributes one to itself and to *x*_{0}+1, which means that we go from parities (*a*, *b*, *c*) for the three remainders modulo 3 to parities (*a*+*c*, *a*+*b*, *b*+*c*): from (1, 0, 0) to (1, 1, 0), then to (1, 0, 1), then to (0, 1, 1), then back to (1, 1, 0), and so on, never reaching (0, 0, 0).
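This cycle is easy to sanity-check numerically (my own quick verification, not part of the original argument): count the odd binomials C(*n*, *k*) in each row by *k* mod 3 and compare the parities with the recurrence.

```python
def residue_parities(n):
    """Parities of the counts of odd C(n, k) in row n, grouped by k mod 3."""
    p = [0, 0, 0]
    for k in range(n + 1):
        if (n & k) == k:    # C(n, k) is odd
            p[k % 3] ^= 1
    return tuple(p)

# Row n = 0 has a single odd entry at k = 0.
triple = (1, 0, 0)
for n in range(1, 64):
    a, b, c = triple
    # each odd entry at k propagates to k and k + 1 in the next row
    triple = (a ^ c, a ^ b, b ^ c)
    assert triple == residue_parities(n)
    assert triple != (0, 0, 0)
```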

This means that in at most three attempts we can make this complex oracle return 1. Now (more magic incoming!) we can do a binary search: if for (*y*_{0}, *u*, *l*, *r*) the oracle returns 1, then for any midpoint *m* it must return 1 for exactly one of (*y*_{0}, *u*, *l*, *m*) and (*y*_{0}, *u*, *m*+1, *r*), since the parities of the two halves xor to the parity of the whole. This way we can find a single cell (*x*_{0}, *y*_{0}) for which the oracle returns 1 in a logarithmic number of requests to the complex oracle, and then switch to using the simple oracle and proceed with reconstructing the answer as described above, completing the solution of C2.
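The search itself can be sketched as follows (a hedged sketch with names of my choosing; `complex_oracle` is assumed to be any callable implementing the parity described above):

```python
def find_anchor(complex_oracle, y0, lo, hi):
    """Binary-search a single x0 in [lo, hi] whose oracle parity is odd."""
    for u in range(3):                      # at most three attempts to get a 1
        if complex_oracle(y0, u, lo, hi):
            l, r = lo, hi
            while l < r:
                m = (l + r) // 2
                # exactly one half keeps an odd count
                if complex_oracle(y0, u, l, m):
                    r = m
                else:
                    l = m + 1
            return l
    return None                             # cannot happen if [lo, hi] is wide enough
```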

I have also mentioned an Open Cup problem: you have some amount *x* of money between 0 and 1. You're playing a betting game where in one turn you bet some amount *y*, and with probability *p* (*p*<0.5) your amount of money becomes *x*+*y*, while with probability 1-*p* it becomes *x*-*y*. Your bet must not exceed your current amount of money. Your goal is to reach amount 1. So far this setup is somewhat standard, but here comes the twist: your bets must be non-decreasing; in other words, at each turn you must bet at least the amount you bet in the previous turn. In case you don't have enough money for that, you lose. What is the probability of winning if you play optimally? More precisely, what is the supremum of the probabilities of winning over all possible strategies? Both *x* and *p* are given as fractions with numerator and denominator not exceeding 10^{6}, and you need to return the answer modulo 998244353 (using modular division).

You can find my approach to solving it in this Codeforces comment.

Thanks for reading, and check back for this week's summary!