## Tuesday, July 21, 2020

### An extract method week

AtCoder Grand Contest 044 was the main event of the May 18 - May 24 week (problems, results, top 5 on the left, analysis). I started by diving into problem E, implementing what seemed to be a reasonable greedy but repeatedly getting a wrong answer, and my inability to give up on trying reasonable-looking ideas without proof ruined the whole contest. To add insult to injury, it turned out that the last idea I wanted to try (also without proof) when the contest ended could actually get the problem accepted :) tourist, on the other hand, came up with a solution to this problem in a normal manner, without trying random ideas, and won the round. Well done!

Here is the problem statement: there are n<=200000 cells arranged in a circle. The i-th cell has two numbers associated with it, a<sub>i</sub> and b<sub>i</sub> (0<=a<sub>i</sub><=10<sup>12</sup>, 0<=b<sub>i</sub><=100). We put a token into one of the cells uniformly at random, and our initial score is zero. Then, we repeatedly face the following choice: we either add a<sub>i</sub> to our score and end the process (where i is the number of the cell the token is currently in), or we subtract b<sub>i</sub> from our score, move the token uniformly at random to one of the two adjacent cells, and continue the process. What is the maximum expected score we can get if we play optimally?
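To make the process concrete, here is a minimal Monte Carlo sketch (not a solution) that estimates the expected score when we play a *fixed* strategy: stop in a predeclared set of cells, keep walking otherwise. The threshold policy and the small instance below are illustrative assumptions of mine, not data from the problem.

```python
import random

def expected_score(a, b, should_stop, trials=200_000, max_steps=10_000):
    """Monte Carlo estimate of the expected score when the token starts
    in a uniformly random cell and we stop in cell i iff should_stop(i)."""
    n = len(a)
    total = 0.0
    for _ in range(trials):
        i = random.randrange(n)          # uniform random starting cell
        score = 0
        for _ in range(max_steps):
            if should_stop(i):
                score += a[i]            # take a_i and end the process
                break
            score -= b[i]                # pay b_i and move to a random neighbor
            i = (i + random.choice((-1, 1))) % n
        total += score
    return total / trials

# Illustrative instance (made up, not from the problem's tests).
a = [0, 5, 1, 8, 2, 0]
b = [1, 1, 1, 1, 1, 1]

# Hypothetical threshold policy: stop as soon as a_i >= 5.
print(expected_score(a, b, lambda i: a[i] >= 5))
```

The actual problem, of course, asks for the best over all strategies, which cannot be found by trying policies like this one; the sketch only illustrates what a single round of the game looks like.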

Let me also use this summary to update on my C++ experiment. For about half a year, I have been using C++ exclusively on all platforms except TopCoder (I kept using Java on TopCoder because JHelper lacks TopCoder integration). By the only objective measures I could come up with, the experiment doesn't seem to be going so well: while on TopCoder I've managed to qualify for the TCO and kept the second place in the overall rating, I've dropped out of the top 10 on Codeforces, my campaign to qualify for the AtCoder World Tour Finals 2021 is not in the best shape, and our Open Cup results seem to have taken a turn for the worse in the second half of the season. Of course, all these numbers might well be statistically insignificant if one digs deeper.

Subjectively, I definitely feel that I'm fighting less with the time limits and can focus on implementing the algorithm. However, I also feel that my coding speed has dropped quite substantially, both because I have to think more about every line of code I write, and because C++ support in JetBrains tools is still catching up to their Java support (for example, "Extract Method", which I use extensively, used to suggest weird parameter names, but that seems to be fixed now). I wonder if it's possible to quantify the coding speed drop based on the screencasts :)

I'm still hopeful that my coding speed will improve, and that CLion will soon have all the necessary features (or already has them), so I plan to continue the experiment at least until the end of the year. Please share any observations or suggestions that you have about this, and of course check back for next week's summary!