19 September 2016

This is an interesting exercise that baffled me for a few days, although you might argue it should be pretty simple for any developer, maybe even for fresh graduates out of university.

The problem: how do you expand an 8-bit number into a 10-bit one? For example, you have an input of 8 bits (0..255) but need to feed it to something with higher resolution (0..1023). You can't just send whatever you get, because that would make your output 4 times weaker than the input. The practical application I was struggling with was receiving RGB values as input (3 bytes) and trying to drive an LED that had 10 bits of resolution per color. If I sent the LED driver exactly what I got from the RGB input, the LED would be only slightly bright (255 out of 1023, about a quarter of full brightness) instead of fully on at the maximum it could reach.

Solution 1

Of course the first solution was the naive one: multiply the input value by 4. But that doesn't fully turn it on at maximum; for example, an input of 255 multiplied by 4 gives an output of 1020. That's pretty close, but I want the remaining 3 steps so the LED reaches its maximum brightness.
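
A minimal sketch of this naive version (the function name is mine, just for illustration):

    #include <stdint.h>

    // Naive expansion: scale by 4, i.e. a left shift by 2.
    // Maps 0..255 onto 0..1020, so the top 3 steps are never reached.
    uint16_t expand10_naive(uint8_t value) {
      return (uint16_t)value << 2;  // 255 -> 1020, not 1023
    }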

Solution 2

I’m ashamed to admit it, but I went with the first solution while the problem was pushed to a background thread in my mind. A few days later, it finally hit me where I was losing the 3 steps.

What you do when you multiply by 4 is actually a shift-left operation: 11111111 << 2, with the result of 1111111100. To achieve the full range of 10 bits, you need to take the top two bits of the input number and add them as the lowest 2 bits of the output: (11111111 << 2) + (11111111 >> 6) = 1111111111.

Let’s try it with another number, say 10000000 => (10000000 << 2) + (10000000 >> 6) => 1000000000 + 10 = 1000000010

Need the code in C?

    #include <stdint.h>

    uint16_t expand10(uint8_t value) {
      // Shift left by 2, then add the top 2 bits of the
      // input as the lowest 2 bits of the result.
      uint16_t expanded = ((uint16_t)value << 2) + (value >> 6);
      return expanded;
    }
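
To sanity-check the endpoints, here's a quick throwaway test (assuming it sits in the same file as the expand10 above):

    #include <stdio.h>

    int main(void) {
      printf("%d\n", expand10(0));    // 0
      printf("%d\n", expand10(128));  // 514
      printf("%d\n", expand10(255));  // 1023
      return 0;
    }

As an aside, replicating the top bits into the bottom ones like this is a standard trick for widening color depth: for 8 to 10 bits it stays within one step of the exact scaling value * 1023 / 255, while needing no multiplication or division at all.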

