Hello.
I understand that 8 bit/channel means that each RGB channel has 256 luminance levels (not sure what else to call them) associated with it. I know that my 30D captures 12-bit/channel RAW, so if I select 16 bit/channel in ACR I can use the full 4096 luminance levels for every channel. OK, that being said, shouldn't I notice a difference in shadow/highlight clipping?
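(Just to be explicit about the numbers I'm assuming, this is plain 2^n arithmetic, nothing specific to ACR or the camera; a quick Python check:

# Number of discrete levels per channel at a given bit depth.
for bits in (8, 12, 16):
    print(f"{bits} bit/channel -> {2 ** bits} levels per channel")

which prints 256, 4096 and 65536.)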
Here's what I think: let's assume you have 4 light levels in your picture (2-bit encoding). The first and the second are clipped (value = 0), while the third and fourth are not.
Now, if you had 8 light levels (3-bit encoding), the 5th, 6th, 7th and 8th levels obviously would not be clipped. What about the first 4? Well, maybe only the first 3 levels would be clipped, but the 4th wouldn't be. This could happen because "half" of the 2nd level in the previous 2-bit encoding would have been clipped, but since the encoding was digital, the device couldn't have "known" that.
So, based on my example, it seems safe to say that by increasing the number of bits used to encode the image, a smaller and smaller part of the image would be clipped (and therefore you'd get a higher dynamic range).
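Here is a minimal Python sketch of that thought experiment, assuming "clipped" simply means an analog value that rounds to code 0 (pure shadow clipping); the luminance ramp and bit depths are made up for illustration and have nothing to do with what ACR actually does:

import numpy as np

# Idealized analog luminance ramp from pure black to pure white.
analog = np.linspace(0.0, 1.0, 100_000)

for bits in (2, 3, 8, 12):
    levels = 2 ** bits
    # Round each analog value to the nearest of the 2**bits codes.
    codes = np.floor(analog * (levels - 1) + 0.5).astype(int)
    # Count how much of the ramp lands in the lowest code ("clipped" shadows).
    clipped = np.count_nonzero(codes == 0)
    print(f"{bits:2d} bits ({levels:5d} levels): "
          f"{clipped / analog.size:.4%} of the ramp rounds to level 0")

In this toy model the fraction of the ramp that rounds to level 0 shrinks as the bit depth grows, which is exactly the intuition above.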
So what am I doing wrong? I see absolutely no difference in ACR between 8 bit and 16 bit (12 bit) with both the shadow and highlight clipping warnings enabled. And considering that 8 bit means 256 levels and 12 bit means 4096 levels, I'm pretty sure I should have seen a difference. It seems impossible for there to be no difference in the clipped areas, because if the encoding used n bits, with n going to infinity, then the picture would have approximately 0 clipped pixels.
Phew, sorry for such a boring question and for my broken English, but an answer would be very much appreciated.
EDIT: I just realized that the RGB readout beneath the histogram in ACR shows only 256 levels... So it still displays 8-bit values even when 16 bit is selected... so... uh... I'm lost...



