TCO 2014 Round1A Editorial

April 5, 2014 marked 20 years since the death of <a href="">Kurt Cobain</a>, but it was also the planned beginning of the Algorithm matches of TopCoder Open 2014.

This year there were several minor changes to the TCO rules. One of them was that the maximum number of participants in the Round 1s was increased to 2500, and more people got a bye. That didn't prevent the slots from running out about 10 minutes before the start of the competition, though. The other new thing was a parallel round for the people who got a bye.

Apparently the larger-than-usual number of participants played its role and there were some minor (*cough*) problems at the beginning of the match. First, the room assignments took longer than 5 minutes, thus leaving the contestants outside the competition rooms when the timer started ticking. This seemed like a small problem – definitely not something we haven't seen before *ahem*. A bit later, though, the Arena (maybe as a tribute and sign of respect to Nirvana's great lead singer?) decided to commit suicide and not let anyone in. This in turn led to the match being postponed by a week – to April 12, 2014. What a coincidence, that's exactly my birthday! Happy birthday to me, yey :) Speaking of which, I totally forgot to introduce myself – I am <a href=""><span style="weight: bold; color: #CC1C23;">espr1t</span></a>, and I was the author of today's problems.



The first problem provided a good opportunity for quick first submissions. Although the problem might seem hard at first, in fact the solution is quite easy and short!

There were two observations that people had to make:

  1. We can use the device as many times as we want. Thus we can use it even to do steps we don't have to!
  2. If we use the device multiple times, the indices we select will be decreasing.

Observation 2) might not be so obvious at first, but it is quite logical. Once we use the device at some index i, all characters from i+L to the right get removed, so the only valid indices on which we can still use the device are {i, i-1, …, 0}. But as we've already used it on i, the characters there are already sorted and the characters from i+L onward are already trimmed. Thus, a new use there won't change the string at all. So, in fact, it only makes sense to use the device on some of the indices {i-1, i-2, …, 0}. (This observation could have also been made by spotting what we do in the examples.)

That said, let's inspect closely what happens when we use the device. We "drag" all the small letters to the front of the interval we use the device on and move all the large ones to the back. Additionally, the string is shortened (as long as we are not using the device on the last L characters). That is a double win for us, because it (potentially) moves smaller characters to the front and makes the string shorter, both of which lead to lexicographically smaller strings.

Now since we've concluded that using the device is good, the observation 1) comes into play. Why not use it as many times as we can? We showed that it doesn't make sense to use it twice on the same index, which means we should use it at most (|S| - L + 1) times. In fact, we will use it *exactly* (|S| - L + 1) times - on all the indices we can, without repeating an index.

Let the initial length of S be P = |S|. We will use the device on the index P-L, then on P-L-1, then on P-L-2 and so on, until reaching index 0. Thus we start from the rightmost position we can and on each step move one position to the left. In practice, this sweeps the entire string from right to left and carries all the small characters to the front.

For example, let's consider the sample from the problem statement. There S = "ABRACADABRA" and L = 5. The sweep looks as follows:

  1. ABRACA[DABRA] -> ABRACAAABDR
  2. ABRAC[AAABD]R -> ABRACAAABD
  3. ABRA[CAAAB]D -> ABRAAAABC
  4. ABR[AAAAB]C -> ABRAAAAB
  5. AB[RAAAA]B -> ABAAAAR
  6. A[BAAAA]R -> AAAAAB
  7. [AAAAA]B -> AAAAA

It turns out that this strategy yields the optimal answer! Indeed, with it we've gathered all the small characters from the string and brought them to the front! The simulation of this algorithm is 3 lines of code in C++ and has time complexity of O(|S| * L * log(L)). This was more than enough for the small constraints of the problem.

    string getMin(string S, int L) {
        for (int i = (int)S.size() - L; i >= 0; i--)
            sort(S.begin() + i, S.begin() + i + L);
        return S.substr(0, L);
    }


You may ask, if we can bring all the small characters to the front, why not just sort the entire string and return the first L characters? That's indeed the tricky part of the problem. Fortunately for the competitors, the last example covered this case – but I suppose without it this problem might have turned into a massacre in the challenge and system test phases.

The answer is that we can gather the smallest characters from the string for all but one of the L positions – we must always include the first character of S in the answer. Indeed, there is no way to remove it! Thus the answer can also be obtained by the following algorithm:

  1. Sort all characters in S from position 1 (0-indexed) to the end;
  2. Trim the result from step 1 to length L;
  3. Sort the result from step 2 and return it.

This solution is a bit faster - its time complexity is O(|S| * log(|S|)).

    public String getMin(String S, int L) {
        char[] asChars = S.toCharArray();
        Arrays.sort(asChars, 1, asChars.length);
        Arrays.sort(asChars, 0, L);
        return new String(asChars, 0, L);
    }


In the end, since the alphabet is very limited, we can even achieve a linear (i.e. O(|S|)) solution by using counting sort instead of a comparison-based sort. For such small constraints, though, this was a total overkill.
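As a sketch of that idea (assuming the alphabet is the uppercase letters 'A'–'Z'; the name getMinLinear is made up for illustration), the counting-sort variant could look like this:

```cpp
#include <bits/stdc++.h>
using namespace std;

// Linear-time variant of the second solution, using counting sort.
// Assumes the alphabet is the uppercase letters 'A'..'Z'.
string getMinLinear(const string& S, int L) {
    // Count the characters from position 1 onwards (S[0] must stay).
    int cnt[26] = {0};
    for (int i = 1; i < (int)S.size(); i++) cnt[S[i] - 'A']++;

    // Keep S[0] and the (L - 1) smallest of the remaining characters.
    string res(1, S[0]);
    for (int c = 0; c < 26 && (int)res.size() < L; c++)
        while (cnt[c]-- > 0 && (int)res.size() < L) res += char('A' + c);

    // Counting sort of the L chosen characters.
    int cnt2[26] = {0};
    for (char ch : res) cnt2[ch - 'A']++;
    string ans;
    for (int c = 0; c < 26; c++) ans.append(cnt2[c], char('A' + c));
    return ans;
}
```

On the sample, getMinLinear("ABRACADABRA", 5) produces "AAAAA", matching the sweep above.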


This is a very, very evil problem. Not because it's hard, or because it has a nasty corner case, or even because the examples are weak (they aren't), but because it is misleading.

The constraints on L would have *totally* tricked me into writing a much harder solution (based on dynamic programming, which I'll explain below). TopCoder (and other competitions) have taught me to look at the constraints in order to deduce the intended time complexity and thus the solution. And with such a small number (only 9 in each direction, thus 19 possible fields for each tile) I, and I guess many other people, would go for a bitmask-based dynamic programming solution with a state of [50][2^19].

Guess what! The intended solution is *sooo* much simpler! :) It turns out that we can apply a greedy strategy, which is much shorter, easier, and faster.

We will start filling the final configuration of tiles in Elly's holder (and thus, the resulting string) from left to right. (Actually, the proper word for "holder" in the game of Scrabble is "rack", but you will probably agree that "Elly has a rack" could have been interpreted in a *very* wrong way xD.) Since we want the lexicographically smallest resulting string, obviously the tile that ends up there should be the one with the smallest letter (and distance <= maxDistance). We should also keep track of which tiles we have already used (put in their final positions) so we don't use them twice. Finally, we should also make sure that at no point there is a tile that cannot be put anywhere anymore. So, the greedy algorithm we will use is as follows:

  1. Initially, mark all tiles as unused.
  2. For each position i (in increasing order, i.e. from left to right) of the final configuration:
  2a. Check if the tile at position i – maxDistance in the initial configuration is not yet used. If that is the case, position i is the only one left for it and we *must* put it there, so we do that.
  2b. Otherwise, search for the smallest unused letter with distance <= maxDistance from the current position. If several unused tiles are tied for the smallest letter, pick the leftmost one. Mark the selected tile as used and append its character to the resulting string.

The string obtained from the tiles in the order we selected them is the final answer.

Put into code, this looks as follows:

    public String getMin(String letters, int maxDistance) {
        char[] ans = new char[letters.length()];
        boolean[] used = new boolean[letters.length()];
        for (int i = 0; i < letters.length(); i++) {
            int idx = -1;
            if (i - maxDistance >= 0 && !used[i - maxDistance]) {
                idx = i - maxDistance;
            } else {
                for (int c = Math.max(0, i - maxDistance); c <= Math.min(letters.length() - 1, i + maxDistance); c++)
                    if (!used[c] && (idx == -1 || letters.charAt(c) < letters.charAt(idx)))
                        idx = c;
            }
            ans[i] = letters.charAt(idx);
            used[idx] = true;
        }
        return new String(ans);
    }


At first it may be somewhat surprising that this greedy works. We should see that since all possible resulting strings have the same length, the only "lexicographically smaller" rule that applies is "has a smaller letter at the first position where they differ". So, if we make an optimal decision for some earlier position (and there is still an answer for the remaining positions), we shouldn't care how the rest of the string will turn out later. The only exception to that is when we can choose from multiple tiles with the same letter (for some position i). But in this case it is easy to see why we should prefer the leftmost one: since it will be the first one we might end up being "forced" to place (step 2a. from our algorithm) later on, we want to get rid of it as early as possible.
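For the skeptical reader, a small brute-force harness can confirm the greedy on tiny inputs. This is just an illustrative sketch (the names greedyMin and bruteMin are made up, and the test strings are arbitrary small inputs, not the actual examples): it compares the greedy against an exhaustive search over all valid tile permutations.

```cpp
#include <bits/stdc++.h>
using namespace std;

// The greedy from the editorial, ported to C++ for comparison.
string greedyMin(const string& letters, int maxDistance) {
    int n = letters.size();
    vector<bool> used(n, false);
    string ans;
    for (int i = 0; i < n; i++) {
        int idx = -1;
        if (i - maxDistance >= 0 && !used[i - maxDistance]) {
            idx = i - maxDistance;  // forced: last chance for this tile
        } else {
            for (int c = max(0, i - maxDistance); c <= min(n - 1, i + maxDistance); c++)
                if (!used[c] && (idx == -1 || letters[c] < letters[idx]))
                    idx = c;
        }
        used[idx] = true;
        ans += letters[idx];
    }
    return ans;
}

// Exhaustive check: try every permutation of tiles that respects maxDistance.
string bruteMin(const string& letters, int maxDistance) {
    int n = letters.size();
    vector<int> p(n);
    iota(p.begin(), p.end(), 0);
    string best = "";
    do {
        bool ok = true;
        for (int i = 0; i < n && ok; i++)
            ok = abs(p[i] - i) <= maxDistance;
        if (!ok) continue;
        string cur;
        for (int i = 0; i < n; i++) cur += letters[p[i]];
        if (best.empty() || cur < best) best = cur;
    } while (next_permutation(p.begin(), p.end()));
    return best;
}
```

Since the brute force enumerates n! permutations it is only usable for roughly n <= 8, but that is plenty to gain confidence in the greedy.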

This algorithm is indeed correct and has time complexity of O(N * maxDistance), where N is the number of tiles Elly has. For the given constraints, this runs insanely fast. In fact, it would run quite fast even if maxDistance was not restricted to 9, but was arbitrarily large!

What's more, with the same algorithm (but a bit different implementation) we could have solved the problem sufficiently fast even for N and maxDistance <= 1,000,000. In the solution above we used a linear search to find the tile with the smallest character. But in fact, we could have used a priority queue (a heap) for the same purpose! That would yield an O(N * log(N)) solution.
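A sketch of that heap-based variant (in C++ rather than Java, with the made-up name getMinHeap): a min-heap keyed by (letter, index) holds the tiles currently in the window, and tiles taken by the forced rule are removed lazily when they surface at the top.

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(N log N) version of the greedy: the linear scan is replaced by a
// min-heap of (letter, index) pairs with lazy deletion.
string getMinHeap(const string& letters, int maxDistance) {
    int n = letters.size();
    vector<bool> used(n, false);
    priority_queue<pair<char, int>, vector<pair<char, int>>, greater<>> pq;
    // Tiles reachable from position 0, except the one pushed in the loop.
    for (int j = 0; j < min(maxDistance, n); j++) pq.push({letters[j], j});
    string ans;
    for (int i = 0; i < n; i++) {
        if (i + maxDistance < n) pq.push({letters[i + maxDistance], i + maxDistance});
        int idx;
        if (i - maxDistance >= 0 && !used[i - maxDistance]) {
            idx = i - maxDistance;  // forced: last chance for this tile
        } else {
            while (used[pq.top().second]) pq.pop();  // lazy deletion
            idx = pq.top().second;
            pq.pop();
        }
        used[idx] = true;
        ans += letters[idx];
    }
    return ans;
}
```

Note that the pair ordering (letter first, index second) automatically implements the tie-breaking rule: smallest letter, then leftmost tile.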

Similarly to the previous problem, we can exploit the small alphabet and do a "counting priority queue" – an array of 26 queues in which we put each character we meet, in the order we meet them. This would improve our runtime to O(N) (treating the size of the alphabet as a constant – which it is here, since it is fixed to 26). Of course, this was not required in this problem, but it could have solved the problem with N as large as several million and arbitrarily large maxDistance.
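A sketch of that idea (again in C++, with the hypothetical name getMinBuckets, assuming uppercase letters): 26 FIFO queues of indices replace the heap, so each position is handled in O(26) time.

```cpp
#include <bits/stdc++.h>
using namespace std;

// O(N) version of the greedy (alphabet size treated as a constant):
// one queue of tile indices per letter, filled in left-to-right order.
string getMinBuckets(const string& letters, int maxDistance) {
    int n = letters.size();
    vector<bool> used(n, false);
    queue<int> buckets[26];
    string ans;
    int pushed = 0;
    for (int i = 0; i < n; i++) {
        // Add the tiles that have entered the window [i - maxD, i + maxD].
        while (pushed < n && pushed <= i + maxDistance) {
            buckets[letters[pushed] - 'A'].push(pushed);
            pushed++;
        }
        int idx = -1;
        if (i - maxDistance >= 0 && !used[i - maxDistance]) {
            idx = i - maxDistance;  // forced: last chance for this tile
        } else {
            for (int c = 0; c < 26; c++) {
                while (!buckets[c].empty() && used[buckets[c].front()])
                    buckets[c].pop();  // discard tiles placed by the forced rule
                if (!buckets[c].empty()) {
                    idx = buckets[c].front();
                    buckets[c].pop();
                    break;
                }
            }
        }
        used[idx] = true;
        ans += letters[idx];
    }
    return ans;
}
```

The forced rule guarantees that every unused tile sitting in a bucket is still within reach to the left, so the queue fronts never need an explicit range check.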

Now let's look at the alternative solutions. As we already mentioned, the competitors could have benefitted from the small constraints on maxDistance and use a bitmask DP to solve the problem. The state of the DP is [50][2^19], where the first dimension is the index we are currently filling, and the second dimension is the bitmask of which tiles we have already used. Its size is 2^19 since we need information about maxDistance tiles to the left, the one at the current position, and maxDistance tiles to the right, which makes 9 + 1 + 9 = 19 (in fact we can do with 18, but let's not go there). This solution has a time complexity of O(N * maxDistance * 2^(maxDistance*2)). It also runs in time (although on the edge), but is much harder (and longer) to implement.

The third viable solution is based on network flow theory. We create a bipartite graph between the tiles and their ending positions (thus we have N nodes on the left and N nodes on the right). Each node on the left side (a tile in the initial configuration) is connected to at most maxDistance * 2 + 1 nodes on the right side (its possible ending positions). As long as there is a flow of size N, there is a way to distribute the tiles into the positions. Obviously, in the beginning that's always the case. However, for each position we might try and say "Let's put the tile with index i into final position j (where |i-j| <= maxDistance) and see if there is still an answer for the remaining tiles". We remove node i from the left side and node j from the right side of the bipartite graph and try running the flow again. This time, we expect it to be N-1 (since we've already put one of the tiles in its final position). If that's okay, we can try fixing the place of another tile, and so on. As long as our decision keeps the flow (together with the tiles we've manually placed) equal to N, we are good.

This solution required some work to get working (and uses theory way beyond what's expected for this slot). I didn't expect many people to actually write it, but I expected that at least some might. It would work, since in the worst case we run N * maxDistance * 2 flows, each of which is O(N^2 * maxDistance) (we have O(N) nodes and O(N*maxDistance) edges; the simplest bipartite matching algorithm is O(V*E)). This means the overall time complexity of the algorithm is O(N^3 * maxDistance^2), which would still fit in the time limit (especially considering that the required number of operations is in fact much smaller).
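A possible sketch of that approach (the names getMinFlow, feasible, and tryMatch are made up; Kuhn's bipartite matching stands in for a general max-flow, which suffices on a bipartite graph): fix tiles position by position, and after each tentative choice verify that the remaining tiles still admit a perfect matching.

```cpp
#include <bits/stdc++.h>
using namespace std;

int n, maxD;
string S;
vector<int> posOf;  // posOf[t] = fixed final position of tile t, or -1

// Kuhn's augmenting path: try to match tile t to a free position >= fromPos.
bool tryMatch(int t, vector<int>& matchT, vector<char>& vis, int fromPos) {
    for (int p = max(fromPos, t - maxD); p <= min(n - 1, t + maxD); p++) {
        if (vis[p]) continue;
        vis[p] = 1;
        if (matchT[p] == -1 || tryMatch(matchT[p], matchT, vis, fromPos)) {
            matchT[p] = t;
            return true;
        }
    }
    return false;
}

// Can all still-unfixed tiles be matched to positions fromPos..n-1?
bool feasible(int fromPos) {
    vector<int> matchT(n, -1);
    for (int t = 0; t < n; t++) {
        if (posOf[t] != -1) continue;
        vector<char> vis(n, 0);
        if (!tryMatch(t, matchT, vis, fromPos)) return false;
    }
    return true;
}

string getMinFlow(const string& letters, int maxDistance) {
    S = letters; n = (int)S.size(); maxD = maxDistance;
    posOf.assign(n, -1);
    string ans;
    for (int i = 0; i < n; i++) {
        int best = -1;
        for (int t = max(0, i - maxD); t <= min(n - 1, i + maxD); t++) {
            if (posOf[t] != -1) continue;
            if (best != -1 && S[t] >= S[best]) continue;
            posOf[t] = i;                 // tentatively fix tile t at position i
            if (feasible(i + 1)) best = t;
            posOf[t] = -1;
        }
        posOf[best] = i;                  // commit the best feasible choice
        ans += S[best];
    }
    return ans;
}
```

On small hand-checked inputs this produces the same strings as the greedy, as it should.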

Finally, a bit of funny trivia: did you notice that the method name and signature (a string and an int) were exactly the same as in the previous problem? In fact, even the *examples* were the same (with the first and last one swapped, because "TOPCODER" was easier to explain in the 500). Talk about being lazy! Nevertheless, the examples for both problems were rather strong, so (in this case) people should not complain about my laziness.


The third and hardest problem from the set in Round 1A involved the quite rare and exotic (for TopCoder) topic of… (drumroll)… Dynamic Programming! This problem also had a 12-line solution, for which it is still rather unobvious to me why it works (its author was <a href=""><span style="weight: bold; color: #FF9900;">mystic_tc</span></a>), but apparently it does. So one could solve the entire problem set in less than 30 lines of contestant's code! How awesome is that? Anyway, my intended solution was a bit longer, but much easier to prove why it works.

First, we must see what we know and what we don't. Since each button always changes the same set of lamps, and we can press each button any number of times, we can initially press each button once to see which lamps are affected by it, and then press it again to reverse its effect. In the end we will be in the initial configuration of lit lamps, but we will know which lamps are influenced by each button.

The brain twist here is that we must consider *all* possible sets for each button in order to find "the worst possible case". So, in fact, we don't know which button changes the state of which lamps, even though there is a way to find it out *for a particular case*. In other words we only know that we will have the information for each button, so we can make optimal (perfect) choices during our calculations.

Since we can click on the buttons in any order, and the order doesn't really matter anyway, we can as well choose one that is convenient for us. Let's do it from left to right (i.e. in increasing order).

If we only click on the buttons in increasing order (once we've clicked on all of them twice to learn which button changes the state of which lamps), then being, for example, at button 5, there is no way to change the state of lamps 0, 1, 2, and 3 anymore. Thus, the result for the subset of lamps {0, 1, 2, 3} does not depend on any further action we take. That's something we usually try to achieve in dynamic programming problems.

So this (which button are we currently considering whether to click or not) will surely be part of the state. But what additional info do we need?

It turns out there are many possibilities. In this problem we will do something somewhat unorthodox. Since button i might affect button i+1, and vice-versa – button i+1 might affect button i, we will consider states *between* two buttons. Looking at it from another angle, this way we will consider two buttons at once, but not the (up to) two buttons surrounding them.

Our state will be [idx][lit][addU][addN], where:

  1. idx is the index of the left of the two buttons we are considering (an integer between 0 and N-1, inclusive, where N is the number of lamps);
  2. lit is whether the lamp with index idx is currently lit (a Boolean);
  3. addU is the value we will add to the answer from the lamp to the left of idx (i.e. the one with index idx-1) if we use the button with index idx (an integer which is either 0 or 1);
  4. addN is the value we will add to the answer from the lamp to the left of idx (i.e. the one with index idx-1) if we do not use the button with index idx (an integer which is either 0 or 1);

Thus, our DP table is 4-dimensional with sizes [50][2][2][2].

At this point we know for sure that the right lamp of the two we are considering is still in its initial state. Since we haven't decided yet whether or not to click on idx, and we will consider higher indices later, there is no chance that we have influenced it in any way. We have the information whether the lamp with index idx is lit in one of the DP variables (lit). We don't know whether the lamp with index idx-1 is lit or not, but we know whether it *will be* depending on whether or not we click on button idx. (Note that this is crucial and non-obvious. Since we are considering states between buttons, we have already selected whether idx influences idx-1 or not in the previous step of the DP. Also, we have already decided whether to click on buttons idx-1 and idx-2 and know whether idx-2 influences idx-1. The only thing we *actually* need from all this is simply whether clicking on button idx will leave lamp idx-1 lit or not.)

As we already mentioned, we have to consider *all* possible scenarios of whether button idx changes the state of lamp idx+1 and whether button idx+1 changes the state of lamp idx. Since we are interested in the worst possible case, out of those four branches we select the one with the maximal answer.

We do, however, have the choice of whether or not to click on button idx. Of course, since it is a choice, we will select the one that results in a smaller final answer.

Thus, for every state there are three nested things we should set:

  1. Whether or not button idx changes the state of lamp idx+1
  2. Whether or not button idx+1 changes the state of lamp idx
  3. Based on 1), 2) (and also lit, addN, and addU) whether or not to click on the button idx.

As with most DPs, the rest is rather easy. We just do a recursion, passing as arguments the state (i.e. the variables idx, lit, addN, and addU) and memoize it. Inside the body of the recursion we do three nested loops that set the three things above. The code for that is actually surprisingly short:

    const int MAX = 50;  // maximal number of lamps
    int lamps[MAX], n;
    int dyn[MAX][2][2][2];

    int recurse(int idx, int lit, int addU, int addN) {
        if (idx == n - 1)
            return min(lit + addN, !lit + addU);
        if (dyn[idx][lit][addU][addN] != -1)
            return dyn[idx][lit][addU][addN];

        int ans = 0;
        for (int changeLeft = 0; changeLeft < 2; changeLeft++) {
            for (int changeRight = 0; changeRight < 2; changeRight++) {
                ans = max(ans, min(
                    // Use left button
                    recurse(idx + 1, lamps[idx + 1] ^ changeRight, lit ^ changeLeft ^ 1, lit ^ 1) + addU,
                    // Don't use left button
                    recurse(idx + 1, lamps[idx + 1], lit ^ changeLeft, lit) + addN
                ));
            }
        }
        return dyn[idx][lit][addU][addN] = ans;
    }


We should invoke the recursion with idx = 0 (buttons 0 and 1 are the first pair) and the current state of lamp 0. Additionally, since there are no lamps to the left of lamp 0, addU and addN will both be 0. Thus, the initial call of the recursion (and the rest of the code) is:

    int getMin(string lamps_) {
        n = (int)lamps_.size();
        for (int i = 0; i < n; i++)
            lamps[i] = (lamps_[i] == 'Y');
        memset(dyn, -1, sizeof(dyn));
        return recurse(0, lamps[0], 0, 0);
    }

The time complexity of this solution is O(N), where N is the number of lamps. Indeed, since the number of lamps each button can influence is a constant (3), our state is 2^3 times larger, but this is also a constant and can be ignored. Additionally, for each state we do another 2^3 operations, but this is also a constant. In the end we end up with a constant factor of 2^6, which, although in theory it should be ignored, I would recommend that competitors consider during contests (it is, after all, 64 times slower!). But for such small N it runs sufficiently fast.

The alternative solution (by <a href=""><span style="weight: bold; color: #FF9900;">mystic_tc</span></a>) is also a DP, but with a smaller state. It is not that hard to see that the real answer is greater than or equal to what his code finds, but it seems the two are indeed equal (which we couldn't prove formally):

    public int getMin(String lamps) {
        int N = lamps.length();
        int[] data = new int[N];
        for (int i = 0; i < N; i++) data[i] = lamps.charAt(i) == 'Y' ? 1 : 0;
        int[] dp = new int[N + 1];
        dp[0] = 0;
        for (int i = 0; i < N; i++) {
            dp[i + 1] = dp[i];
            int sum = data[i];
            for (int j = i - 1; j >= 0; j--) {
                sum += data[j];
                dp[i + 1] = Math.max(dp[i + 1], dp[j] + sum % 2);
            }
        }
        return dp[N];
    }